path | concatenated_notebook
---|---
mturk/python/MTurk.ipynb | ###Markdown
Annotating Training Data With MTurk

Pre-requisites

If you haven't already, you'll need to set up MTurk and AWS accounts that are linked together to use MTurk with Python. The MTurk account will be used to post tasks to the MTurk crowd, and the AWS account will be used to connect to MTurk via API and provide access to any additional AWS resources that are needed to execute your task.

1. If you don't have an AWS account already, visit https://aws.amazon.com and create an account you can use for your project.
2. If you don't have an MTurk Requester account already, visit https://requester.mturk.com and create a new account.

After you've set up your accounts, you will need to link them together. While logged in to both the root of your AWS account and your MTurk account, visit https://requester.mturk.com/developer to link them.

From your AWS console, create a new AWS IAM User or select an existing one you plan to use. Add the AmazonMechanicalTurkFullAccess policy to your user. Then select the Security Credentials tab and create a new Access Key; copy the Access Key and Secret Access Key for future use.

If you haven't installed the awscli yet, install it with pip (pip install awscli) and configure a profile using the access key and secret key above (aws configure --profile mturk).

To post tasks to MTurk for Workers to complete, you will first need to add funds to your account that will be used to reward Workers. Visit https://requester.mturk.com/account to get started with as little as $1.00.

We also recommend installing xmltodict as shown below.
###Code
!pip install boto3
!pip install xmltodict
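# The cells below assume an AWS CLI profile named 'mturk' already exists (see the setup notes above).
# A minimal sketch of creating it -- aws configure is interactive, so it is usually run in a terminal:
# !aws configure --profile mturk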
###Output
Requirement already satisfied: xmltodict in /Users/mm06682/.prefix/sw/miniconda/lib/python3.7/site-packages (0.12.0)
###Markdown
Overview

Amazon Mechanical Turk allows you to post tasks for Workers to complete at https://worker.mturk.com. To post a task to MTurk you create an HTML form that includes the information you want them to provide. In this example we'll be asking Workers to rate how relevant an ad is to the webpage it appears on, on a scale of 1 to 5.

MTurk has a Sandbox environment that can be used for testing. Workers won't see your tasks in the Sandbox, but you can log in and complete them yourself to test the task interface at https://workersandbox.mturk.com. It's recommended you test first in the Sandbox to make sure your task returns the data you need before moving to the Production environment. There is no cost to use the Sandbox environment.
###Code
import boto3
import xmltodict
import json
import os
from datetime import datetime
import random
import pandas as pd
import csv
from IPython.display import clear_output
from time import sleep
import glob
create_hits_in_production = False
environments = {
"production": {
"endpoint": "https://mturk-requester.us-east-1.amazonaws.com",
"preview": "https://www.mturk.com/mturk/preview"
},
"sandbox": {
"endpoint": "https://mturk-requester-sandbox.us-east-1.amazonaws.com",
"preview": "https://workersandbox.mturk.com/mturk/preview"
},
}
mturk_environment = environments["production"] if create_hits_in_production else environments["sandbox"]
session = boto3.Session(profile_name='mturk')
client = session.client(
service_name='mturk',
region_name='us-east-1',
endpoint_url=mturk_environment['endpoint'],
)
# This will return your current MTurk balance if you are connected to Production.
# If you are connected to the Sandbox it will return $10,000.
print(client.get_account_balance()['AvailableBalance'])
###Output
10000.00
###Markdown
Define your task

For this project we are going to collect Worker ratings of how relevant an ad is to the webpage it appears on. Each row of the survey-groups CSV describes one combination of ad/webpage screenshots, and we will create one MTurk Human Intelligence Task (HIT) per row.
###Code
survey_groups = pd.read_csv('/Users/mm06682/projects/school_projects/fall_2019/software_engineering/google-ad-bias-research/mturk/python/temp_groups.csv')
#imagePath = "https://my-image-repo-520.s3.amazonaws.com/uploads"
imagePath = "https://ad-page-image-repo.s3.us-east-2.amazonaws.com"
# Preview the image URLs that will be substituted into each HIT
#t = survey_groups[10:25]
t = survey_groups
for index, row in t.iterrows():
for i, col in enumerate(survey_groups.columns):
imgName = row[col]
print(index, i,row[col],"{}/{}".format(imagePath,imgName.strip().replace(" ","+")))
###Output
0 0 Bustle - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//Bustle+-+Sephora.png
0 1 ESPN - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//ESPN+-+Sephora.png
0 2 ESPNW - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//ESPNW+-+Sephora.png
0 3 GMP - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//GMP+-+Sephora.png
0 4 GQ - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//GQ+-+Sephora.png
0 5 Healthline - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//Healthline+-+Sephora.png
0 6 NYTimes - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//NYTimes+-+Sephora.png
0 7 R29 - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//R29+-+Sephora.png
0 8 WaPo - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//WaPo+-+Sephora.png
1 0 Bustle - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//Bustle+-+Sephora.png
1 1 ESPN - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//ESPN+-+Sephora.png
1 2 ESPNW - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//ESPNW+-+Sephora.png
1 3 GMP - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//GMP+-+Sephora.png
1 4 GQ - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//GQ+-+Sephora.png
1 5 Healthline - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//Healthline+-+Sephora.png
1 6 NYTimes - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//NYTimes+-+Sephora.png
1 7 R29 - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//R29+-+Sephora.png
1 8 WaPo - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//WaPo+-+Sephora.png
2 0 Bustle - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//Bustle+-+Sephora.png
2 1 ESPN - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//ESPN+-+Sephora.png
2 2 ESPNW - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//ESPNW+-+Sephora.png
2 3 GMP - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//GMP+-+Sephora.png
2 4 GQ - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//GQ+-+Sephora.png
2 5 Healthline - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//Healthline+-+Sephora.png
2 6 NYTimes - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//NYTimes+-+Sephora.png
2 7 R29 - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//R29+-+Sephora.png
2 8 WaPo - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//WaPo+-+Sephora.png
3 0 Bustle - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//Bustle+-+Sephora.png
3 1 ESPN - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//ESPN+-+Sephora.png
3 2 ESPNW - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//ESPNW+-+Sephora.png
3 3 GMP - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//GMP+-+Sephora.png
3 4 GQ - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//GQ+-+Sephora.png
3 5 Healthline - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//Healthline+-+Sephora.png
3 6 NYTimes - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//NYTimes+-+Sephora.png
3 7 R29 - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//R29+-+Sephora.png
3 8 WaPo - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//WaPo+-+Sephora.png
4 0 Bustle - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//Bustle+-+Sephora.png
4 1 ESPN - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//ESPN+-+Sephora.png
4 2 ESPNW - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//ESPNW+-+Sephora.png
4 3 GMP - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//GMP+-+Sephora.png
4 4 GQ - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//GQ+-+Sephora.png
4 5 Healthline - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//Healthline+-+Sephora.png
4 6 NYTimes - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//NYTimes+-+Sephora.png
4 7 R29 - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//R29+-+Sephora.png
4 8 WaPo - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//WaPo+-+Sephora.png
5 0 Bustle - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//Bustle+-+Sephora.png
5 1 ESPN - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//ESPN+-+Sephora.png
5 2 ESPNW - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//ESPNW+-+Sephora.png
5 3 GMP - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//GMP+-+Sephora.png
5 4 GQ - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//GQ+-+Sephora.png
5 5 Healthline - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//Healthline+-+Sephora.png
5 6 NYTimes - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//NYTimes+-+Sephora.png
5 7 R29 - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//R29+-+Sephora.png
5 8 WaPo - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//WaPo+-+Sephora.png
6 0 Bustle - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//Bustle+-+Sephora.png
6 1 ESPN - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//ESPN+-+Sephora.png
6 2 ESPNW - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//ESPNW+-+Sephora.png
6 3 GMP - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//GMP+-+Sephora.png
6 4 GQ - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//GQ+-+Sephora.png
6 5 Healthline - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//Healthline+-+Sephora.png
6 6 NYTimes - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//NYTimes+-+Sephora.png
6 7 R29 - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//R29+-+Sephora.png
6 8 WaPo - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//WaPo+-+Sephora.png
7 0 Bustle - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//Bustle+-+Sephora.png
7 1 ESPN - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//ESPN+-+Sephora.png
7 2 ESPNW - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//ESPNW+-+Sephora.png
7 3 GMP - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//GMP+-+Sephora.png
7 4 GQ - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//GQ+-+Sephora.png
7 5 Healthline - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//Healthline+-+Sephora.png
7 6 NYTimes - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//NYTimes+-+Sephora.png
7 7 R29 - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//R29+-+Sephora.png
7 8 WaPo - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//WaPo+-+Sephora.png
8 0 Bustle - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//Bustle+-+Sephora.png
8 1 ESPN - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//ESPN+-+Sephora.png
8 2 ESPNW - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//ESPNW+-+Sephora.png
8 3 GMP - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//GMP+-+Sephora.png
8 4 GQ - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//GQ+-+Sephora.png
8 5 Healthline - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//Healthline+-+Sephora.png
8 6 NYTimes - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//NYTimes+-+Sephora.png
8 7 R29 - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//R29+-+Sephora.png
8 8 WaPo - Sephora.png https://ad-page-image-repo.s3.us-east-2.amazonaws.com//WaPo+-+Sephora.png
###Markdown
MTurk accepts an XML document containing the HTML that will be displayed to Workers. Workers will see this HTML for each HIT that is created. Here the survey interface (survey.html, stored alongside this notebook) is loaded and inserted into the XML document. Within the HTML are placeholder variables such as ${url_1}, ${website_1}, and ${ad_name_1} that will be replaced with the image URL, page name, and ad name for each survey group when the HIT is created.
###Code
html_layout = open('/Users/mm06682/projects/school_projects/fall_2019/software_engineering/google-ad-bias-research/mturk/python/survey.html', 'r',encoding="utf-8").read()
QUESTION_XML = """<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
<HTMLContent><![CDATA[{}]]></HTMLContent>
<FrameHeight>650</FrameHeight>
</HTMLQuestion>"""
question_xml = QUESTION_XML.format(html_layout)
###Output
_____no_output_____
###Markdown
In Mechanical Turk each task is represented by a Human Intelligence Task (HIT), which is an individual item you want annotated by one or more Workers together with the interface that should be displayed. The definition below requests that nine Workers review each item, that the HIT remain live on the worker.mturk.com website for up to seven days, and that Workers complete each assignment within two hours. Each response has a reward of \$0.50, so the total Worker reward for each HIT would be \$4.50 plus MTurk fees. An appropriate title, description, and keywords are also provided to let Workers know what is involved in this task, and a locale qualification restricts the task to Workers in the US.
###Code
TaskAttributes = {
'MaxAssignments': 9,
'LifetimeInSeconds': 60*60*24*7, # How long the task will be available on the MTurk website (7 days)
'AssignmentDurationInSeconds': 60*60*2, # How long Workers have to complete each item (2 Hours)
'Reward': '0.50', # The reward you will offer Workers for each response
'Title': 'Answer questions about ads',
'Keywords': 'survey, ad, webpage, questionnaire',
'Description': 'Rate the relevancy of an ad to a webpage from 1 to 5',
'QualificationRequirements': [
{
            'QualificationTypeId': '00000000000000000071',  # built-in MTurk Worker locale qualification
'Comparator': 'EqualTo',
'LocaleValues': [
{
'Country': 'US',
},
]
}
]
}
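# Rough cost sketch (an added illustration, not part of the original notebook; it assumes
# MTurk's standard 20% fee on Worker rewards for HITs with fewer than 10 assignments):
estimated_reward_per_hit = float(TaskAttributes['Reward']) * TaskAttributes['MaxAssignments']  # 9 x $0.50 = $4.50
estimated_cost_per_hit = round(estimated_reward_per_hit * 1.20, 2)                             # + 20% fee = $5.40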
###Output
_____no_output_____
###Markdown
Create the tasks

Here a HIT is created for each survey group so that it can be completed by Workers. Prior to creating the HIT, the image URLs, page names, and ad names for that group are substituted into the Question XML content. The HIT Id returned for each task is stored in a results array so that we can retrieve the results later.
###Code
survey_groups = pd.read_csv('/Users/mm06682/projects/school_projects/fall_2019/software_engineering/google-ad-bias-research/mturk/python/temp_groups.csv')
results = []
hit_type_id = ''
numberOfImages = 9  # number of image placeholders per survey HIT
#imagePath = "https://my-image-repo-520.s3.amazonaws.com/uploads"
imagePath = "https://ad-page-image-repo.s3.us-east-2.amazonaws.com/uploads"
#slicedData = survey_groups[70:90]
slicedData = survey_groups
for index, row in slicedData.iterrows():
result = {}
question = question_xml
for i, col in enumerate(survey_groups.columns):
imgName = row[col]
#to_split = imgName.replace('.png', '').replace('.PNG', '').replace('Page2', '').replace('Page', '').replace('Ad', '').replace('Book', 'NYT').replace('Food', 'NYT').replace('Fasion', 'Fashion').replace('bp', 'BP').replace('Impeach', 'Impeachment')
to_split = imgName.replace('.png','')
pageName, adName = to_split.split(' - ')
question = question.replace('${{url_{0}}}'.format(i+1), "{}/{}".format(imagePath, imgName.strip().replace(" ", "+")))
question = question.replace('${{website_{0}}}'.format(i+1), pageName.strip())
question = question.replace('${{ad_name_{0}}}'.format(i+1), adName.strip())
result['image{}'.format(i+1)] = imgName
response = client.create_hit(
**TaskAttributes,
Question = question
)
print(index+1)
hit_type_id = response['HIT']['HITGroupId']
result['id'] = index + 1
result['hit_id'] = response['HIT']['HITId']
results.append(result)
print("You can view the HITs here:")
print(mturk_environment['preview'] + "?groupId={}".format(hit_type_id))
if not os.path.exists("result/"):
os.makedirs("result/")
now = datetime.now()
dt_string = now.strftime("%d-%m-%Y-%H-%M-%S")
with open('result/result-{}.json'.format(dt_string), 'w') as outfile:
json.dump(results, outfile)
###Output
1
2
3
4
5
6
7
8
9
You can view the HITs here:
https://workersandbox.mturk.com/mturk/preview?groupId=3Z18Y091UA8RDCEEAQFHJ1EPO8KF1Y
###Markdown
Block workers
###Code
if os.path.exists('workerIDs.json'):
with open('workerIDs.json') as json_file:
workerIDs = json.load(json_file)
for wid in workerIDs:
response = client.create_worker_block(
WorkerId=wid,
Reason='You already did this HIT.'
)
###Output
_____no_output_____
###Markdown
Delete worker block
###Code
workerId = ""
response = client.delete_worker_block(
    WorkerId=workerId,
    Reason='You are not blocked anymore.'
)
results
###Output
_____no_output_____
###Markdown
Get Results

Depending on the task, results will be available anywhere from a few minutes to a few hours. Here we retrieve the status of each HIT and the responses that have been provided by Workers.
###Code
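# For reference, xmltodict.parse(assignment['Answer']) (used further down) yields a structure
# shaped roughly like the illustrative sketch below, which getAnsewer() flattens into a plain dict:
#   {'QuestionFormAnswers': {'Answer': [{'QuestionIdentifier': 'age', 'FreeText': '34'}, ...]}}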
def getAnsewer(answer_dict):
    """Flatten the parsed QuestionFormAnswers structure into a plain dict of responses."""
    demographic_fields = ["age", "gender", "race", "zipCode", "Hispanic",
                          "education", "occupation", "Political"]
    # Likert radio buttons about the ad (d1.*) map to a 1-5 "feelAboutAd" score
    likert_map = {"d1.strong_disagree": 1, "d1.disagree": 2, "d1.Unsure": 3,
                  "d1.agree": 4, "d1.strong_agree": 5}
    answer = {}
    for ans in answer_dict['QuestionFormAnswers']['Answer']:
        qid = ans['QuestionIdentifier']
        if qid in demographic_fields:
            answer[qid] = ans["FreeText"]
        elif qid in likert_map and ans["FreeText"] == "true":
            answer["feelAboutAd"] = likert_map[qid]
        elif qid.startswith("q") and "." in qid and ans["FreeText"] == "true":
            # Survey radio buttons are named qN.M; the selected option M is the 1-5 rating for question qN
            question, rating = qid.split(".")
            answer[question] = int(rating)
    return answer
def createCSV(filename):
with open('result/{}-ans.json'.format(filename), 'r') as f:
results = json.load(f)
if not os.path.exists("csv_output/"):
os.makedirs("csv_output/")
with open("csv_output/{}.csv".format(filename), "w", newline='') as output:
f = csv.writer(output)
        # Write the CSV header; remove this line if you don't need it
f.writerow(["surveyId","hitId","AssignmentId",
"workerId","age","gender","education","occupation","Hispanic","race","Political","zipcode","duration",
"feelAboutAd",
"q1","q2","q3","q4","q5","q6","q7","q8","q9","q10","q11","q12","q13","q14","q15","q16",
"image1","image2","image3","image4","image5","image6","image7","image8","image9",
"image10","image11","image12","image13","image14","image15","image16"])
for item in results:
for answer in item["answers"]:
f.writerow([item["id"],item["hit_id"],answer["assignmentId"],
answer["workerId"],answer["age"],answer["gender"],
answer["education"],answer["occupation"],answer["Hispanic"],
answer["race"],answer["Political"],answer["zipCode"],answer["duration"],
answer["feelAboutAd"],
answer["q1"],answer["q2"],answer["q3"],answer["q4"],answer["q5"],answer["q6"],
answer["q7"],answer["q8"],answer["q9"],answer["q10"],answer["q11"],answer["q12"],
answer["q13"],answer["q14"],answer["q15"],answer["q16"],
item["image1"],item["image2"],item["image3"],item["image4"],item["image5"],
item["image6"],item["image7"],item["image8"],item["image9"],item["image10"],
item["image11"],item["image12"],item["image13"],item["image14"],item["image15"],
item["image16"]])
## Block workers
def blockWorkers(workerIDs):
for wid in workerIDs:
response = client.create_worker_block(
WorkerId=wid,
Reason='You already did this HIT.'
)
# resultPath = "result/result-25-11-2019-11-50-33.json" # 90 ass 1-10 complete
# resultPath = "result/result-25-11-2019-17-09-43.json" # 135 ass 10-25
# resultPath = "result/result-25-11-2019-20-37-29.json" # 135 ass 25-40
# resultPath = "result/result-25-11-2019-21-52-30.json" # 135 ass 40-55
howMany = 1
sleepTime = 0  # seconds to wait between polling passes (e.g. 60 * 15)
while howMany > 0:
howMany -= 1
clear_output()
listOfPathes = {
"01-10-90" : "result/result-25-11-2019-11-50-33.json", #complete 90
"10-25-135" : "result/result-25-11-2019-17-09-43.json",
"25-40-135" : "result/result-25-11-2019-20-37-29.json",
"40-55-135" : "result/result-25-11-2019-21-52-30.json",
"55-70-135" : "result/result-26-11-2019-00-06-50.json",
"70-90-180" : "result/result-26-11-2019-00-09-48.json"
}
newWorkerIds = []
bigTotalAnswers = 0
completeAns = 0
for key in listOfPathes.keys():
start,end,num = key.split("-")
num = int(num)
print("Result of {}".format(key))
resultPath = listOfPathes[key]
with open(resultPath, 'r') as f:
results = json.load(f)
workerIDs = []
if os.path.exists('workerIDs.json'):
with open('workerIDs.json') as json_file:
workerIDs = json.load(json_file)
numOfAnswers = {}
totalAnswers = 0
for item in results:
# Get the status of the HIT
hit = client.get_hit(HITId=item['hit_id'])
item['status'] = hit['HIT']['HITStatus']
# Get a list of the Assignments that have been submitted by Workers
assignmentsList = client.list_assignments_for_hit(
HITId=item['hit_id'],
AssignmentStatuses=['Submitted', 'Approved'],
MaxResults=10
)
assignments = assignmentsList['Assignments']
item['assignments_submitted_count'] = len(assignments)
answers = []
for assignment in assignments:
                # Retrieve the attributes for each Assignment
worker_id = assignment['WorkerId']
assignment_id = assignment['AssignmentId']
accept_time = assignment['AcceptTime']
submit_time = assignment['SubmitTime']
deltaTime = submit_time-accept_time
if worker_id not in workerIDs:
workerIDs.append(worker_id)
newWorkerIds.append(worker_id)
if deltaTime.total_seconds() > 60:
# Retrieve the value submitted by the Worker from the XML
answer_dict = xmltodict.parse(assignment['Answer'])
# print(answer_dict)
answer = getAnsewer(answer_dict)
answer['duration'] = deltaTime.total_seconds()
answer['workerId'] = worker_id
answer['assignmentId'] = assignment_id
# print (answer)
answers.append(answer)
# Approve the Assignment (if it hasn't already been approved)
if assignment['AssignmentStatus'] == 'Submitted':
client.approve_assignment(
AssignmentId=assignment_id,
OverrideRejection=False
)
else:
print('Reject assignment= {} with workerid={} and hitid={}'.format(assignment_id,worker_id,item['hit_id']))
client.reject_assignment(
AssignmentId=assignment_id,
RequesterFeedback='You did not finish the assignment properly'
)
numOfAnswers[item['hit_id']] = len(answers)
totalAnswers += len(answers)
# Add the answers that have been retrieved for this item
item['answers'] = answers
with open('workerIDs.json', 'w') as outfile:
json.dump(workerIDs, outfile)
head, tail = os.path.split(resultPath)
filename = tail.split(".")[0]
with open('result/{}-ans.json'.format(filename), 'w') as outfile:
json.dump(results, outfile)
print ("Total Answers = {}/{}".format(totalAnswers,num))
print(json.dumps(numOfAnswers,indent=2))
createCSV(filename)
bigTotalAnswers += totalAnswers
completeAns += num
if totalAnswers == num:
print("{} ********** COMPLETE **********".format(key))
print("We have {}/{} total answers".format(bigTotalAnswers,completeAns))
print ("Blocking {} new workers".format(len(newWorkerIds)))
blockWorkers(newWorkerIds)
all_filenames = [i for i in glob.glob('csv_output/*.csv')]
#combine all files in the list
combined_csv = pd.concat([pd.read_csv(f,dtype = {'zipcode': str}) for f in all_filenames ])
#export to csv
combined_csv.to_csv( "combined_csv.csv", index=False, encoding='utf-8-sig')
sleep(sleepTime)
###Output
Result of 01-10-90
Total Answers = 90/90
{
"3UUIU9GZC6IG2GALB6PMT8DPS4C5TN": 9,
"33QQ60S6ATVW7M39E59KH930UA80UO": 9,
"3VAOOVPI3056LS51UK32O6Z5KHPLLN": 9,
"36818Z1KV4Q895O8RNACJ6PDQJ03AA": 9,
"3P520RYKCIJV2TPABBFWY4ZRBIJ5UE": 9,
"3MGHRFQY2M2DOVNEO669KU9DQYIY0J": 9,
"3KG2UQJ0MK170POZY2RDIK2OZICNQG": 9,
"3D5G8J4N5BHX0AP0HX7ZX67O5SDVTJ": 9,
"37G6BXQPLRY36JBM53OUSX8FLQBQEU": 9,
"3L21G7IH489DVK8WKPA9YZSNVQOY1M": 9
}
01-10-90 ********** COMPLETE **********
Result of 10-25-135
Total Answers = 135/135
{
"356TQKY9XGACR0WGW1TO0WVY6YT78P": 9,
"3Y7LTZE0YUZT979ZIZMCN86MDIXUZJ": 9,
"3QE4DGPGBSOU1SKFD175PXSM3FG4GI": 9,
"34OWYT6U3XU9UPWMKK3ZRHGI4KM9IB": 9,
"39I4RL8QGKU81OFQX7PNAX4NIFT4HH": 9,
"359AP8GAGHXE33MTDD9T2IRHWPEC70": 9,
"3T5ZXGO9DF11HE2I1Q27D79XAXRZQJ": 9,
"3421H3BM9BU5P0GS22OZ3IVIHZB9JN": 9,
"3EAWOID6MUAWK1S9JVJPDV5J9INV0H": 9,
"30U1YOGZGB9ARTCZ1C2FHF5V432SD2": 9,
"3RDTX9JRTZEC55FQ1TZ20SQ9Q4B79E": 9,
"33K3E8REWX866F27EEXMUV8M8R7X89": 9,
"3GITHABACZYQ86MEWR1CM24LKD62NS": 9,
"375VSR8FVXM1TFHIE5R7IXJ26JTZRK": 9,
"3DWGDA5POGHWRQDRFENPR0OTEATV1K": 9
}
10-25-135 ********** COMPLETE **********
Result of 25-40-135
Total Answers = 135/135
{
"3P7QK0GJ3UYKPV0XZFC6HPBUURIZ2A": 9,
"3RWB1RTQDK01X60GSNN50IMJ6S3P8I": 9,
"3QREJ3J434AV1MNJ9KR196MQWIWKLY": 9,
"3TX9T2ZCBAE61BY4DZ1D5WRJYYDWZP": 9,
"39WICJI5AU59ADWG3FQB0ZGM05IZ3L": 9,
"3PZDSVZ3J6U0BK1105K0FH1J4R6N4D": 9,
"391JB9X4ZZLF74549WSG9J1FWNMKM0": 9,
"3D3B8GE89341BBJQXFTJ0EHUQYLP97": 9,
"3SR6AEG6W66OZVTAMEDUQM8UDGWHYJ": 9,
"3J6BHNX0UA5LPMQ4LX7GMYQF1DWKN0": 9,
"3YOAVL4CA1UUS9FK3TVUA48SE5EZ46": 9,
"3K3IX1W4S74FGUTVPL7JW9SEWG3PAM": 9,
"3AC6MFV69LVO4L3FI0FD4THWJA0HZA": 9,
"34YWR3PJ29NOOQX4JAK71G24PT0X0R": 9,
"372AGES0I5ICOKH3DN3MLC3RYJ0RXK": 9
}
25-40-135 ********** COMPLETE **********
Result of 40-55-135
Total Answers = 135/135
{
"3GL25Y68447LR44B8F75ZD9HSW1XM2": 9,
"3K3G488TR3L3A0ITU2VRO3229YOQ58": 9,
"3SCKNODZ0YTXU7JK23I7ITLTVTX7NI": 9,
"3TFJJUELSI27H2PN71SY6RA9L9X2CF": 9,
"3O71U79SRC2DVNICE51I05SPDHZSMP": 9,
"3X4Q1O9UBIZFCSJW3IIKXILYL327OX": 9,
"3MJ9GGZYO4JJONSPNNW70WKLL6CA2L": 9,
"3EAWOID6MUAWK1S9JVJPDV5J9ISV0M": 9,
"3ZICQFRS32VAV639OMLK40L1F0HZZI": 9,
"3YCT0L9OMNMJD53CQ6GIDKHPI79SNP": 9,
"3I01FDIL6NLHMUV17XN9QXSR6992DF": 9,
"3DWGDA5POGHWRQDRFENPR0OTEAYV1P": 9,
"39KMGHJ4R0NGR0RTGYV0S2FTFMG000": 9,
"33IXYHIZB6VME0913SYTWFAC6LA2E8": 9,
"386659BNTMUGYPCBCHZ067Y3KEM013": 9
}
40-55-135 ********** COMPLETE **********
Result of 55-70-135
Total Answers = 135/135
{
"3PKVGQTFIIX1OP7DIJPO4QA00PWYRZ": 9,
"3E9VAUV7BXR9P8LWG83290LQ0L9AY4": 9,
"3566S7OX5EWUG0CVXGK9LM8QSSU176": 9,
"3Q7TKIAPOUNNNGH9H35E5TR0C92DLH": 9,
"3H5TOKO3DAWS4ZV91OYJD6FX91K64P": 9,
"3PGQRAZX03XDP47QX8PTU0FWMRFYSN": 9,
"37SDSEDINAFYGEHE7LCFZM4L3C018T": 9,
"3VDVA3ILIESD9TNWQJO3RZLJRVDG1O": 9,
"39HYCOOPKPY7TYZUHF6T566PCESDMJ": 9,
"33BFF6QPI2O5GIRBSS641VM4J3H3W2": 9,
"3WUVMVA7OCG9UYV1CU5LN7US6FDAZV": 9,
"39XCQ6V3KZHA0IZ9FTHTIYLB9XF65U": 9,
"3TZDZ3Y0JTJYUJ0OCDITZIZWNII19I": 9,
"3HXK2V1N4LSKYE5S9NOHWIVOJ1RG25": 9,
"3HYV4299H19X0FBQJ97U66NFSL48EJ": 9
}
55-70-135 ********** COMPLETE **********
Result of 70-90-180
Total Answers = 180/180
{
"3SA4EMRVJWFJFHVDXJAQ73G1NUUP0Y": 9,
"3RDTX9JRTZEC55FQ1TZ20SQ9Q4I79L": 9,
"3MD8CKRQZ00BT0CEWJOLU3VCNLOJRR": 9,
"37OPIVELUVGN3DV768ZEN0QNAPWHAC": 9,
"3UOMW19E6EJZGZ8APKUW4YDL5JQC5S": 9,
"3I7SHAD35N9RCPYKQ2375EWTQ3PM7Z": 9,
"3RWO3EJELIMJM6GVT2EQL8ZBSM0P11": 9,
"3YD0MU1NC3EQAOPVTZD2WN1TWM07A0": 9,
"3LEG2HW4UG0EKE9XY3IZEXVCIMRF25": 9,
"3YGYP13642M7CQ3ZBHGDNACTW5TRN3": 9,
"338431Z1FMSPUB3BCWGQ2ZCYMFYROI": 9,
"3DGDV62G7PMQBRYRC6E4QR9GKSEP2I": 9,
"3M93N4X8HL0NUFCRB8OQKD089N7JSF": 9,
"3538U0YQ1G735W5G23W4X704O0RF3G": 9,
"3X55NP42EPTFW9UAG6S991E8Q6EP3T": 9,
"3UV0D2KX1NWONSOK2H1N7CSA20NF41": 9,
"3VEI3XUCZSA7FBFCRWT5RZHOWKURPZ": 9,
"3MXX6RQ9EWI0E5DEGKXSJ66E46AP4E": 9,
"3M7OI89LVZ1VZ38OU341W4RL1E1C68": 9,
"31SIZS5W5ASSFNGRR98UR47Y1GGRQQ": 9
}
70-90-180 ********** COMPLETE **********
We have 810/810 total answers
Blocking 0 new workers
|
Chapter 10/feature_store/02 - BlazingText on Amazon Reviews - Classification.ipynb | ###Markdown
Building a text classification model on the Amazon Reviews dataset
1. Inspect and process data with pandas and nltk
2. Store engineered features in Amazon SageMaker Feature Store (offline and online)
3. Build a dataset from the offline feature store with Amazon Athena
4. Train and deploy a classification model with Amazon SageMaker and BlazingText
5. Predict a few samples
6. Clean up

1 - Inspect and process data
###Code
import pandas as pd
import numpy as np
import time
from time import gmtime, strftime
fs_training_output_path = 's3://sagemaker-us-east-1-613904931467/sagemaker-scikit-learn-2021-07-05-07-54-15-145/output/fs_data/fs_data.tsv'
data = pd.read_csv(fs_training_output_path, sep='\t',
error_bad_lines=False, dtype='str')
data.head()
###Output
_____no_output_____
###Markdown
2 - Create Feature Group and load data
###Code
import boto3, sagemaker
from sagemaker.session import Session
sagemaker_session = Session()
region = sagemaker_session.boto_region_name
boto_session = boto3.Session(region_name=region)
role = sagemaker.get_execution_role()
default_bucket = sagemaker_session.default_bucket()
prefix = 'amazon-reviews-featurestore'
sagemaker_client = boto_session.client(service_name='sagemaker', region_name=region)
featurestore_runtime = boto_session.client(service_name='sagemaker-featurestore-runtime', region_name=region)
feature_store_session = Session(
boto_session=boto_session,
sagemaker_client=sagemaker_client,
sagemaker_featurestore_runtime_client=featurestore_runtime
)
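# Note: wiring both the SageMaker client and the Feature Store runtime client into this Session
# lets the FeatureGroup object created below use the former for control-plane calls (create/describe)
# and the latter for data-plane calls such as ingesting records.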
###Output
_____no_output_____
###Markdown
Define the feature group name
###Code
from sagemaker.feature_store.feature_group import FeatureGroup
feature_group_name = 'amazon-reviews-feature-group-' + strftime('%d-%H-%M-%S', gmtime())
feature_group = FeatureGroup(name=feature_group_name, sagemaker_session=feature_store_session)
###Output
_____no_output_____
###Markdown
Define the name of the column storing a unique record id (e.g. primary key)
###Code
record_identifier_feature_name = 'review_id'
###Output
_____no_output_____
###Markdown
Add a column to store feature timestamps
###Code
event_time_feature_name = 'event_time'
current_time_sec = int(round(time.time()))
# event_time is the name of the new column. A bit confusing!
data = data.assign(event_time=current_time_sec)
# Make sure that timestamps are correctly set
# NaN timestamps will cause ingestion errors
data[data.isna().any(axis=1)]
data.head()
###Output
_____no_output_____
###Markdown
Infer feature definitions from the pandas dataframe
###Code
data['review_id'] = data['review_id'].astype('str').astype('string')
data['product_id'] = data['product_id'].astype('str').astype('string')
data['review_body'] = data['review_body'].astype('str').astype('string')
data['label'] = data['label'].astype('str').astype('string')
data['star_rating'] = data['star_rating'].astype('int64')
data['event_time'] = data['event_time'].astype('float64')
# We could also use the UNIX date/time format, and we'd set the type to string
feature_group.load_feature_definitions(data_frame=data)
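# load_feature_definitions infers Feature Store types from the pandas dtypes:
# 'string' -> String, 'int64' -> Integral, 'float64' -> Fractional (plain 'object'
# columns are not supported, which is why the columns were cast explicitly above).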
###Output
_____no_output_____
###Markdown
Create the feature group
###Code
feature_group.create(
s3_uri='s3://{}/{}'.format(default_bucket, prefix),
record_identifier_name=record_identifier_feature_name,
event_time_feature_name=event_time_feature_name,
role_arn=role,
enable_online_store=True,
description="1.8M+ tokenized camera reviews from the Amazon Customer Reviews dataset",
tags=[
{ 'Key': 'Dataset', 'Value': 'amazon customer reviews' },
{ 'Key': 'Subset', 'Value': 'cameras' },
{ 'Key': 'Owner', 'Value': 'Julien Simon' }
]
)
from time import sleep
import sys
while feature_group.describe().get("FeatureGroupStatus") != 'Created':
sys.stdout.write('.')
sleep(1)
# boto3 doesn't have waiters for Feature Store
# Please +1 this issue on GitHub https://github.com/boto/boto3/issues/2788
###Output
_____no_output_____
###Markdown
Ingest features into our feature group, directly from the pandas dataframe
###Code
feature_group.ingest(data_frame=data, max_workers=10, wait=True)
###Output
_____no_output_____
###Markdown
3 - Use Amazon Athena to build a training dataset
###Code
feature_group_query = feature_group.athena_query()
feature_group_table = feature_group_query.table_name
print(feature_group_table)
###Output
_____no_output_____
###Markdown
Build and run SQL query
###Code
# Find the most popular cameras and their average rating
query_string = 'SELECT product_id, avg(star_rating), count(*) as review_count \
FROM "'+ feature_group_table+'"' \
+ ' GROUP BY product_id \
ORDER BY review_count desc;'
print(query_string)
# Keep only best selling cameras with at least 1,000 reviews
query_string = 'SELECT * FROM \
(SELECT product_id, avg(star_rating) as avg_rating, count(*) as review_count \
FROM "'+ feature_group_table+'"' + ' \
GROUP BY product_id) \
WHERE review_count > 1000 \
ORDER BY review_count DESC;'
print(query_string)
# Find the corresponding labeled reviews, ready for BlazingText training
query_string = 'SELECT label,review_body FROM "' \
+ feature_group_table+'"' \
+ ' INNER JOIN (SELECT product_id FROM (SELECT product_id, avg(star_rating) as avg_rating, count(*) as review_count \
FROM "' + feature_group_table+'"' \
+ ' GROUP BY product_id) WHERE review_count > 1000) tmp ON "' \
+ feature_group_table+'"'+ '.product_id=tmp.product_id;'
print(query_string)
dataset = pd.DataFrame()
feature_group_query.run(query_string=query_string, output_location='s3://'+default_bucket+'/query_results/')
feature_group_query.wait()
dataset = feature_group_query.as_dataframe()
dataset.head()
dataset.shape
dataset['label'].value_counts()
###Output
_____no_output_____
###Markdown
Split dataset for training and validation, and save it to text files
###Code
from sklearn.model_selection import train_test_split
training, validation = train_test_split(dataset, test_size=0.1)
print(training.shape)
print(validation.shape)
np.savetxt('/tmp/training.txt', training.values, fmt='%s')
np.savetxt('/tmp/validation.txt', validation.values, fmt='%s')
!head -5 /tmp/training.txt
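# BlazingText's supervised mode expects one sample per line, beginning with a '__label__<class>'
# prefix followed by the (tokenized) text; the 'label' column prepared upstream is assumed to
# already be in that format, which is why the files can be written out as plain text here.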
###Output
_____no_output_____
###Markdown
4 - Train a classification model on Amazon SageMaker with the BlazingText algorithm
###Code
prefix = 'blazing-text-amazon-reviews'
s3_train_path = sagemaker_session.upload_data(path='/tmp/training.txt', bucket=default_bucket, key_prefix=prefix+'/input/train')
s3_val_path = sagemaker_session.upload_data(path='/tmp/validation.txt', bucket=default_bucket, key_prefix=prefix+'/input/validation')
s3_output = 's3://{}/{}/output/'.format(default_bucket, prefix)
print(s3_train_path)
print(s3_val_path)
print(s3_output)
from sagemaker import image_uris
container = image_uris.retrieve('blazingtext', region)
print(container)
bt = sagemaker.estimator.Estimator(container,
role,
instance_count=1,
instance_type='ml.p3.2xlarge',
output_path=s3_output)
bt.set_hyperparameters(mode='supervised')
train_data = sagemaker.TrainingInput(
s3_train_path,
content_type='text/plain')
validation_data = sagemaker.TrainingInput(
s3_val_path,
content_type='text/plain')
s3_channels = {'train': train_data, 'validation': validation_data}
bt.fit(inputs=s3_channels)
bt_predictor = bt.deploy(initial_instance_count=1, instance_type='ml.t2.medium')
instances = [' I really love this camera , it takes amazing pictures . ',
' this camera is ok , it gets the job done . Nothing fancy . ',
' Poor quality , the camera stopped working after a couple of days .']
import pprint
payload = {"instances" : instances, "configuration": {"k": 3}}
bt_predictor.serializer = sagemaker.serializers.JSONSerializer()
bt_predictor.deserializer = sagemaker.deserializers.JSONDeserializer()
response = bt_predictor.predict(payload)
pprint.pprint(response)
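# The captured output was not saved; the endpoint typically returns one dict per instance, e.g.
# [{'label': ['__label__<top_class>', ...], 'prob': [0.97, ...]}, ...] -- the top-k labels with
# probabilities (k=3 as requested in the payload above), shown here only as a sketch.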
###Output
_____no_output_____
###Markdown
5 - Clean up
###Code
bt_predictor.delete_endpoint()
feature_group.delete()
# How to remove old feature groups
old_feature_group_name = 'amazon-reviews-feature-group-19-09-49-20'
old_feature_group = FeatureGroup(name=old_feature_group_name, sagemaker_session=feature_store_session)
old_feature_group.delete()
###Output
_____no_output_____ |
Data Scientist's Salary Prediction/Data Scientist's Salary Prediction.ipynb | ###Markdown
**Exploring the dataset**
###Code
# Returns number of rows and columns of the dataset
df.shape
# Returns an object with all of the column headers
df.columns
# Returns different datatypes for each columns (float, int, string, bool, etc.)
df.dtypes
# Returns the first x number of rows when head(x). Without a number it returns 5
df.head()
# Returns the last x number of rows when tail(x). Without a number it returns 5
df.tail()
# Returns basic information on all columns
df.info()
# Returns basic statistics on numeric columns
df.describe().T
# Returns true for a column having null values, else false
df.isnull().any()
###Output
_____no_output_____
###Markdown
**Data Cleaning**
###Code
# Removing the 'Unnamed' column
df.drop(labels='Unnamed: 0', axis='columns', inplace=True)
df.columns
# Removing the rows having '-1' as Salary Estimate value
print("Before: ",df.shape)
df = df[df['Salary Estimate'] != "-1"]
print("After: ", df.shape)
# Removing the text value from 'Salary Estimate' column
salary = df['Salary Estimate'].apply(lambda x: x.split("(")[0])
salary
# Removing '$' and 'K' from 'Salary Estimate' column
salary = salary.apply(lambda x: x.replace("$","").replace("K",""))
salary
# Finding any inconsistencies in the salary
print("Length of Salary: ",len(salary.unique()))
salary.unique()[380:]
# Creating column for 'Per Hour'
df['salary_per_hour'] = salary.apply(lambda x: 1 if "per hour" in x.lower() else 0)
df['salary_per_hour'].value_counts()
# Creating column for 'Employee Provided Salary'
df['emp_provided_salary'] = salary.apply(lambda x: 1 if "employer provided salary" in x.lower() else 0)
df['emp_provided_salary'].value_counts()
# Removing 'Per Hour' and 'Employer Provided Salary' from 'Salary Estimate' column
salary = salary.apply(lambda x: x.lower().replace("per hour", "").replace("employer provided salary:", "").replace(" ",""))
salary.unique()[380:]
# Creating column for min_salary
df["min_salary"] = salary.apply(lambda x: int(x.split("-")[0]))
df["min_salary"].tail()
# Creating column for max_salary
df["max_salary"] = salary.apply(lambda x: int(x.split("-")[1]))
df["max_salary"].tail()
# Creating column for average_salary
df["average_salary"] = (df["min_salary"]+df["max_salary"])/2
# Converting the hourly salaries to annual salaries (salaries are in $K, so hourly rate x ~2,000 working hours/year is roughly hourly x 2 in thousands)
df['min_salary'] = df.apply(lambda x: x['min_salary']*2 if x['salary_per_hour'] == 1 else x['min_salary'], axis=1)
df['max_salary'] = df.apply(lambda x: x['max_salary']*2 if x['salary_per_hour'] == 1 else x['max_salary'], axis=1)
df[df['salary_per_hour'] == 1][['salary_per_hour','min_salary','max_salary']]
# Removing numbers from 'Company Name' column
df["Company Name"] = df['Company Name'].apply(lambda x: x.split("\n")[0])
df["Company Name"].head(10)
# Creating a column 'job_state'
df["job_state"] = df["Location"].apply(lambda x: x.split(',')[1])
df["job_state"].head()
df['job_state'].unique()
# Fixing Los Angeles to CA
df['job_state'] = df['job_state'].apply(lambda x: x.strip() if x.strip().lower() != 'los angeles' else 'CA')
df['job_state'].value_counts()[:5]
df['job_state'].unique()
# Calculating age of the companies
df["company_age"] = df['Founded'].apply(lambda x: x if x<1 else 2020-x)
df["company_age"].head()
# Cleaning the 'Job Description' column
df["python_job"] = df['Job Description'].apply(lambda x: 1 if 'python' in x.lower() else 0)
df["r_job"] = df['Job Description'].apply(lambda x: 1 if 'r studio' in x.lower() else 0)
df["spark_job"] = df['Job Description'].apply(lambda x: 1 if 'spark' in x.lower() else 0)
df["aws_job"] = df['Job Description'].apply(lambda x: 1 if 'aws' in x.lower() else 0)
df["excel_job"] = df['Job Description'].apply(lambda x: 1 if 'excel' in x.lower() else 0)
# Python Jobs
df.python_job.value_counts()
# R Studio Jobs
df.r_job.value_counts()
# Spark Jobs
df.spark_job.value_counts()
# AWS Jobs
df.aws_job.value_counts()
# Excel Jobs
df.excel_job.value_counts()
# Dataset till now
df.head()
# Cleaning the 'Job Title' column
def title_simplifier(title):
if 'data scientist' in title.lower():
return 'data scientist'
elif 'data engineer' in title.lower():
return 'data engineer'
elif 'analyst' in title.lower():
return 'analyst'
elif 'machine learning' in title.lower():
return 'mle'
elif 'manager' in title.lower():
return 'manager'
elif 'director' in title.lower():
return 'director'
else:
return 'na'
df['job_title_simplified'] = df['Job Title'].apply(title_simplifier)
df['job_title_simplified'].value_counts()
def seniority(title):
    if 'sr' in title.lower() or 'senior' in title.lower() or 'lead' in title.lower() or 'principal' in title.lower():
return 'senior'
    elif 'jr' in title.lower():
return 'jr'
else:
return 'na'
df['job_seniority'] = df['Job Title'].apply(seniority)
df['job_seniority'].value_counts()
# Cleaning 'Competitors' column
df['Competitors'] = df['Competitors'].apply(lambda x: len(x.split(',')) if x != '-1' else 0)
df['Competitors']
# Cleaning 'Type of Ownership' column
df['Type of ownership'].value_counts()
def ownership_simplifier(text):
if 'private' in text.lower():
return 'Private'
elif 'public' in text.lower():
return 'Public'
elif ('-1' in text.lower()) or ('unknown' in text.lower()):
return 'Other Organization'
else:
return text
df['Type of ownership'] = df['Type of ownership'].apply(ownership_simplifier)
df['Type of ownership'].value_counts()
# Cleaning 'Revenue' column
df['Revenue'].value_counts()
def revenue_simplifier(text):
if '-1' in text.lower():
return 'Unknown / Non-Applicable'
else:
return text
df['Revenue'] = df['Revenue'].apply(revenue_simplifier)
df['Revenue'].value_counts()
df['Size'].value_counts()
# Cleaning 'Size' column
def size_simplifier(text):
if '-1' in text.lower():
return 'Unknown'
else:
return text
df['Size'] = df['Size'].apply(size_simplifier)
df['Size'].value_counts()
# Dataset till now
df.head()
###Output
_____no_output_____
###Markdown
**Exploratory Data Analysis**
###Code
# Importing essential libraries
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
df.describe().T
df['Rating'].hist()
plt.xlabel('Ratings')
plt.ylabel('Count')
plt.title("Company Ratings Histogram")
df['company_age'].hist()
plt.xlabel('Time (in Years)')
plt.ylabel('Count')
plt.title("Companies Age Histogram")
df['average_salary'].hist()
plt.xlabel('Annual Salary (in $)')
plt.ylabel('Count')
plt.title("Average Salary Histogram")
sns.boxplot(y='average_salary', data=df, orient='v', palette='Set1')
sns.boxplot(y='company_age', data=df, orient='v', palette='Set1')
sns.boxplot(y='Rating', data=df, orient='v', palette='Set1')
# Finding Correlation between columns
df[['company_age','average_salary','Rating']].corr()
# Plotting the correlation
cmap = sns.diverging_palette(220, 10, as_cmap=True)
sns.heatmap(df[['company_age','average_salary','Rating']].corr(), vmax=.3, center=0, cmap=cmap, square=True, linewidths=.5, cbar_kws={"shrink": .5})
# Exploring categorical data
df.columns
df_categorical = df[['Company Name', 'Location', 'Headquarters', 'Size', 'Type of ownership', 'Industry', 'Sector', 'Revenue', 'job_title_simplified', 'job_seniority']]
# Plotting the data for 'Location' and 'Headquarters' columns
for i in ['Location', 'Headquarters']:
unique_categories = df_categorical[i].value_counts()[:20]
print("Graph for {}\nTotal records = {}".format(i, len(unique_categories)))
chart = sns.barplot(x=unique_categories.index, y=unique_categories)
chart.set_xticklabels(chart.get_xticklabels(), rotation=90)
plt.show()
# Plotting the data for 'Company Name', 'Size', 'Type of ownership', 'Revenue' columns
for i in ['Company Name', 'Size', 'Type of ownership', 'Revenue']:
unique_categories = df_categorical[i].value_counts()[:20]
print("Graph for {}\nTotal records = {}".format(i, len(unique_categories)))
chart = sns.barplot(x=unique_categories.index, y=unique_categories)
chart.set_xticklabels(chart.get_xticklabels(), rotation=90)
plt.show()
# Plotting the data for 'Industry', 'Sector' columns
for i in ['Industry', 'Sector']:
unique_categories = df_categorical[i].value_counts()[:20]
print("Graph for {}\nTotal records = {}".format(i, len(unique_categories)))
chart = sns.barplot(x=unique_categories.index, y=unique_categories)
chart.set_xticklabels(chart.get_xticklabels(), rotation=90)
plt.show()
# Plotting the data for 'job_title_simplified', 'job_seniority' columns
for i in ['job_title_simplified', 'job_seniority']:
unique_categories = df_categorical[i].value_counts()[:20]
print("Graph for {}\nTotal records = {}".format(i, len(unique_categories)))
chart = sns.barplot(x=unique_categories.index, y=unique_categories)
chart.set_xticklabels(chart.get_xticklabels(), rotation=90)
plt.show()
df.columns
pd.pivot_table(df, index=['job_title_simplified','job_seniority'], values='average_salary')
pd.pivot_table(df, index=['job_state','job_title_simplified'], values='average_salary').sort_values('average_salary', ascending=False)[:20]
pd.pivot_table(df, index='job_state', values='average_salary').sort_values('average_salary', ascending=False)[:15]
# Top 15 Industries for Data Scientists
pd.pivot_table(df, index='Industry', values='average_salary').sort_values('average_salary', ascending=False)[:15]
# Top 10 Sectors for Data Scientists
pd.pivot_table(df, index='Sector', values='average_salary').sort_values('average_salary', ascending=False)[:10]
# Top Company types that pay Data Scientists well
pd.pivot_table(df, index='Type of ownership', values='average_salary').sort_values('average_salary', ascending=False)[:10]
# Top 20 Companies that pay Data Scientists well
pd.pivot_table(df, index='Company Name', values='average_salary').sort_values('average_salary', ascending=False)[:20]
###Output
_____no_output_____
###Markdown
**Feature Engineering** *Trimming Columns*
###Code
# Trimming the 'Industry' column
# Taking top 11 Industries and replacing others by 'Others'
industry_list = ['Biotech & Pharmaceuticals', 'Insurance Carriers', 'Computer Hardware & Software', 'IT Services', 'Health Care Services & Hospitals',
'Enterprise Software & Network Solutions', 'Consulting', 'Internet', 'Advertising & Marketing', 'Aerospace & Defense', 'Consumer Products Manufacturing']
def industry_simplifier(text):
if text not in industry_list:
return 'Others'
else:
return text
df['Industry'] = df['Industry'].apply(industry_simplifier)
# Trimming the 'job_state' column
# Taking top 10 States and replacing others by 'Others'
job_state_list = ['CA', 'MA', 'NY', 'VA', 'IL', 'MD', 'PA', 'TX', 'NC', 'WA']
def job_state_simplifier(text):
if text not in job_state_list:
return 'Others'
else:
return text
df['job_state'] = df['job_state'].apply(job_state_simplifier)
# Adding column of 'job_in_headquarters'
df['job_in_headquarters'] = df.apply(lambda x: 1 if x['Location'] == x['Headquarters'] else 0, axis=1)
df.columns
# Choosing relevant columns
df_model = df.copy(deep=True)
df_model = df_model[['average_salary', 'Rating', 'company_age', 'Size', 'Type of ownership', 'Industry', 'Revenue', 'Competitors',
'job_title_simplified', 'job_seniority', 'job_state', 'job_in_headquarters', 'python_job', 'spark_job', 'aws_job', 'excel_job', ]]
# Renaming columns
df_model.rename(columns={'Rating':'company_rating', 'Size':'company_size', 'Type of ownership':'type_of_ownership',
'Industry':'industry', 'Revenue':'revenue', 'Competitors':'competitors'}, inplace=True)
df_model.columns
###Output
_____no_output_____
###Markdown
*Handling Ordinal Categorical Features*
###Code
# Mapping ranks to 'company_size' columns since it is ordinal categorical feature
size_map = {'Unknown': 0, '1 to 50 employees': 1, '51 to 200 employees': 2, '201 to 500 employees': 3,
'501 to 1000 employees': 4, '1001 to 5000 employees': 5, '5001 to 10000 employees': 6, '10000+ employees': 7}
df_model['company_size_rank'] = df_model['company_size'].map(size_map)
df_model.drop('company_size', axis=1, inplace=True)
# Mapping ranks to 'revenue ' columns since it is ordinal categorical feature
revenue_map = {'Unknown / Non-Applicable': 0, 'Less than $1 million (USD)': 1, '$1 to $5 million (USD)': 2, '$5 to $10 million (USD)': 3,
'$10 to $25 million (USD)': 4, '$25 to $50 million (USD)': 5, '$50 to $100 million (USD)': 6, '$100 to $500 million (USD)': 7,
'$500 million to $1 billion (USD)': 8, '$1 to $2 billion (USD)': 9, '$2 to $5 billion (USD)':10, '$5 to $10 billion (USD)':11, '$10+ billion (USD)':12}
df_model['company_revenue_rank'] = df_model['revenue'].map(revenue_map)
df_model.drop('revenue', axis=1, inplace=True)
# Mapping ranks to 'job_seniority ' columns since it is ordinal categorical feature
job_seniority_map = {'na': 0, 'jr': 1, 'senior': 2}
df_model['job_seniority_rank'] = df_model['job_seniority'].map(job_seniority_map)
df_model.drop('job_seniority', axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
*Handling Nominal Categorical Features*
###Code
# Removing 'type_of_ownership' column using get_dummies()
df_model = pd.get_dummies(columns=['type_of_ownership'], data=df_model)
df_model.shape
# Removing 'industry' column using get_dummies()
df_model = pd.get_dummies(columns=['industry'], data=df_model)
df_model.shape
# One-hot encoding the 'job_title_simplified' column using get_dummies() (drops the original column)
df_model = pd.get_dummies(columns=['job_title_simplified'], data=df_model)
df_model.shape
# One-hot encoding the 'job_state' column using get_dummies() (drops the original column)
df_model = pd.get_dummies(columns=['job_state'], data=df_model)
df_model.shape
###Output
_____no_output_____
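As a brief aside (an added illustration, not from the original notebook): `get_dummies()` replaces a nominal column with one indicator column per category, and passing `drop_first=True` would drop one redundant column per feature, which mainly matters for plain linear models. A toy example, with made-up values:
```python
# Toy illustration of get_dummies(); 'state' and its values are made-up examples.
import pandas as pd
toy = pd.DataFrame({'state': ['CA', 'NY', 'CA']})
print(pd.get_dummies(toy, columns=['state']))                   # state_CA and state_NY
print(pd.get_dummies(toy, columns=['state'], drop_first=True))  # only state_NY remains
```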
###Markdown
*Feature Scaling*
###Code
df_model.head()
# Dataset after Feature Engineering
df_model.shape
X = df_model.drop('average_salary', axis=1)
y = df_model['average_salary']
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
columns_to_scale = ['company_rating', 'competitors', 'company_age', 'company_size_rank', 'company_revenue_rank']
X[columns_to_scale] = scaler.fit_transform(X[columns_to_scale])
# Splitting the dataset into train and test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
print("Training set size: {} and Testing set size: {}".format(X_train.shape, X_test.shape))
###Output
Training set size: (593, 50) and Testing set size: (149, 50)
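One possible refinement, sketched here as an addition (the original notebook scales before splitting): fitting the scaler on the training split only avoids leaking test-set statistics into preprocessing. The names `X_tr`, `X_te`, `y_tr`, `y_te`, and `alt_scaler` are introduced just for this sketch.
```python
# Leakage-free variant: fit MinMaxScaler on the training rows only.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
X_raw = df_model.drop('average_salary', axis=1)
y_raw = df_model['average_salary']
X_tr, X_te, y_tr, y_te = train_test_split(X_raw, y_raw, test_size=0.20, random_state=42)
X_tr, X_te = X_tr.copy(), X_te.copy()
alt_scaler = MinMaxScaler()
X_tr[columns_to_scale] = alt_scaler.fit_transform(X_tr[columns_to_scale])
X_te[columns_to_scale] = alt_scaler.transform(X_te[columns_to_scale])
```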
###Markdown
**Model Building** *Linear Regression*
###Code
# Creating linear regression model
from sklearn.linear_model import LinearRegression
lr_model = LinearRegression()
# Fitting the dataset to the model
lr_model.fit(X_train, y_train)
print("Accuracy of the Linear Regression Model on Training set is : {}% and on Test set is {}%".format(round(lr_model.score(X_train, y_train),4)*100, round(lr_model.score(X_test, y_test),4)*100))
###Output
Accuracy of the Linear Regression Model on Training set is : 57.879999999999995% and on Test set is 60.68%
###Markdown
*Decision Tree Regression*
###Code
# Creating decision tree regression model
from sklearn.tree import DecisionTreeRegressor
decision_model = DecisionTreeRegressor(criterion='mse', max_depth=11, random_state=42)
# Fitting the dataset to the model
decision_model.fit(X_train, y_train)
print("Accuracy of the Decision Tree Regression Model on Training set is : {}% and on Test set is {}%".format(round(decision_model.score(X_train, y_train),4)*100, round(decision_model.score(X_test, y_test),4)*100))
###Output
Accuracy of the Decision Tree Regression Model on Training set is : 93.17% and on Test set is 75.57000000000001%
###Markdown
*Random Forest Regression*
###Code
# Creating random forest regression model
from sklearn.ensemble import RandomForestRegressor
forest_model = RandomForestRegressor(n_estimators=100, criterion='mse', random_state=42)
# Fitting the dataset to the model
forest_model.fit(X_train, y_train)
print("Accuracy of the Random Forest Regression Model on Training set is : {}% and on Test set is {}%".format(round(forest_model.score(X_train, y_train),4)*100, round(forest_model.score(X_test, y_test),4)*100))
###Output
Accuracy of the Random Forest Regression Model on Training set is : 95.25% and on Test set is 76.59%
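As an added cross-check (not part of the original notebook), k-fold cross-validation gives a more robust estimate of the R² score than a single train/test split; `criterion` is left at its default here to stay compatible across scikit-learn versions.
```python
# 5-fold cross-validated R^2 for the random forest (illustrative sketch).
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(RandomForestRegressor(n_estimators=100, random_state=42),
                            X, y, cv=5, scoring='r2')
print("Cross-validated R^2: %.3f +/- %.3f" % (cv_scores.mean(), cv_scores.std()))
```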
###Markdown
*AdaBoost Regression Model*
###Code
# Creating AdaBoost regression model
from sklearn.ensemble import AdaBoostRegressor
adb_model = AdaBoostRegressor(base_estimator=decision_model, n_estimators=250, learning_rate=1, random_state=42)
# Fitting the dataset to the model
adb_model.fit(X_train, y_train)
print("Accuracy of the AdaBoost Regression Model on Training set is : {}% and on Test set is {}%".format(round(adb_model.score(X_train, y_train),4)*100, round(adb_model.score(X_test, y_test),4)*100))
###Output
Accuracy of the AdaBoost Regression Model on Training set is : 96.58% and on Test set is 78.62%
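A short follow-up sketch (an addition, not from the original notebook): the `score()` values above are R² scores rather than accuracies, and since R² is scale-free it is useful to also compare the fitted models by mean absolute error on the same test set; the error is in the units of `average_salary`.
```python
# Compare the fitted models by mean absolute error on the held-out test set.
from sklearn.metrics import mean_absolute_error
for name, model in [('Linear Regression', lr_model), ('Decision Tree', decision_model),
                    ('Random Forest', forest_model), ('AdaBoost', adb_model)]:
    print("%-17s MAE = %.2f" % (name, mean_absolute_error(y_test, model.predict(X_test))))
```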
|
Intro_Debugging.ipynb | ###Markdown
Introduction to DebuggingIn this book, we want to explore _debugging_ - the art and science of fixing bugs in computer software. In particular, we want to explore techniques that _automatically_ answer questions like: Where is the bug? When does it occur? And how can we repair it? But before we start automating the debugging process, we first need to understand what this process is.In this chapter, we introduce basic concepts of systematic software debugging and the debugging process, and at the same time get acquainted with Python and interactive notebooks.
###Code
from bookutils import YouTubeVideo, quiz
YouTubeVideo("bCHRCehDOq0")
###Output
_____no_output_____
###Markdown
**Prerequisites*** The book is meant to be a standalone reference; however, a number of _great books on debugging_ are listed at the end,* Knowing a bit of _Python_ is helpful for understanding the code examples in the book. SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from debuggingbook.Intro_Debugging import ```and then make use of the following features.In this chapter, we introduce some basics of how failures come to be as well as a general process for debugging. A Simple Function Your Task: Remove HTML MarkupLet us start with a simple example. You may have heard of how documents on the Web are made out of text and HTML markup. HTML markup consists of _tags_ in angle brackets that surround the text, providing additional information on how the text should be interpreted. For instance, in the HTML text```htmlThis is <em>emphasized</em>.```the word "emphasized" is enclosed in the HTML tags `<em>` (start) and `</em>` (end), meaning that it should be interpreted (and rendered) in an emphasized way – typically in italics. In your environment, the HTML text gets rendered as> This is emphasized.There are HTML tags for pretty much everything – text markup (bold text, strikethrough text), text structure (titles, lists), references (links) to other documents, and many more. These HTML tags shape the Web as we know it. However, within all the HTML markup, it may become difficult to actually access the _text_ that lies within. We'd like to implement a simple function that removes _HTML markup_ and converts it into text. If our input is```htmlHere's some <strong>strong argument</strong>.```the output should be> Here's some strong argument. Here's a Python function which does exactly this. It takes a (HTML) string and returns the text without markup.
###Code
def remove_html_markup(s):
tag = False
out = ""
for c in s:
if c == '<': # start of markup
tag = True
elif c == '>': # end of markup
tag = False
elif not tag:
out = out + c
return out
###Output
_____no_output_____
###Markdown
This function works, but not always. Before we start debugging things, let us first explore its code and how it works. Understanding Python ProgramsIf you're new to Python, you might first have to understand what the above code does. We very much recommend the [Python tutorial](https://docs.python.org/3/tutorial/) to get an idea on how Python works. The most important things for you to understand the above code are these three:1. Python structures programs through _indentation_, so the function and `for` bodies are defined by being indented;2. Python is _dynamically typed_, meaning that the type of variables like `c`, `tag`, or `out` is determined at run-time.3. Most of Python's syntactic features are inspired by other common languages, such as control structures (`while`, `if`, `for`), assignments (`=`), or comparisons (`==`, `!=`, `<`).With that, you can already understand what the above code does: `remove_html_markup()` takes a (HTML) string `s` and then iterates over the individual characters (`for c in s`). By default, these characters are added to the return string `out`. However, if `remove_html_markup()` finds a `<` character, it sets the `tag` flag, and further characters are skipped until a `>` character is found.In contrast to other languages, Python makes no difference between strings and characters – there's only strings. As in HTML, strings can be enclosed in single quotes (`'a'`) and in double quotes (`"a"`). This is useful if you want to specify a string that contains quotes, as in `'She said "hello", and then left'` or `"The first character is a 'c'"` Running a FunctionTo find out whether `remove_html_markup()` works correctly, we can *test* it with a few values. For the string```htmlHere's some <strong>strong argument</strong>.```for instance, it produces the correct value:
###Code
remove_html_markup("Here's some <strong>strong argument</strong>.")
###Output
_____no_output_____
###Markdown
Interacting with NotebooksIf you are reading this in the interactive notebook, you can try out `remove_html_markup()` with other values as well. Click on the above cells with the invocation of `remove_html_markup()` and change the value – say, to `remove_html_markup("foo")`. Press Shift+Enter (or click on the play symbol) to execute it and see the result. If you get an error message, go to the above cell with the definition of `remove_html_markup()` and execute this first. You can also run _all_ cells at once; see the Notebook menu for details. (You can actually also change the text by clicking on it, and corect mistaks such as in this sentence.) Executing a single cell does not execute other cells, so if your cell builds on a definition in another cell that you have not executed yet, you will get an error. You can select `Run all cells above` from the menu to ensure all definitions are set. Also keep in mind that, unless overwritten, all definitions are kept across executions. Occasionally, it thus helps to _restart the kernel_ (i.e. start the Python interpreter from scratch) to get rid of older, superfluous definitions. Testing a Function Since one can change not only invocations, but also definitions, we want to ensure that our function works properly now and in the future. To this end, we introduce tests through _assertions_ – a statement that fails if the given _check_ is false. The following assertion, for instance, checks that the above call to `remove_html_markup()` returns the correct value:
###Code
assert remove_html_markup("Here's some <strong>strong argument</strong>.") == \
"Here's some strong argument."
###Output
_____no_output_____
###Markdown
If you change the code of `remove_html_markup()` such that the above assertion fails, you will have introduced a bug. Oops! A Bug! As nice and simple as `remove_html_markup()` is, it is buggy. Some HTML markup is not properly stripped away. Consider this HTML tag, which would render as an input field in a form:```html<input type="text" value="<your name>">```If we feed this string into `remove_html_markup()`, we would expect an empty string as the result. Instead, this is what we get:
###Code
remove_html_markup('<input type="text" value="<your name>">')
###Output
_____no_output_____
###Markdown
Every time we encounter a bug, this means that our earlier tests have failed. We thus need to introduce another test that documents not only how the bug came to be, but also the result we actually expected. The assertion we write now fails with an error message. (The `ExpectError` magic ensures we see the error message, but the rest of the notebook is still executed.)
###Code
from ExpectError import ExpectError
with ExpectError():
assert remove_html_markup('<input type="text" value="<your name>">') == ""
###Output
Traceback (most recent call last):
File "<ipython-input-7-c7b482ebf524>", line 2, in <module>
assert remove_html_markup('<input type="text" value="<your name>">') == ""
AssertionError (expected)
###Markdown
With this, we now have our task: _Fix the failure as above._ Visualizing CodeTo properly understand what is going on here, it helps drawing a diagram on how `remove_html_markup()` works. Technically, `remove_html_markup()` implements a _state machine_ with two states `tag` and `¬ tag`. We change between these states depending on the characters we process. This is visualized in the following diagram:
###Code
from graphviz import Digraph, nohtml
from IPython.display import display
PASS = "✔"
FAIL = "✘"
PASS_COLOR = 'darkgreen' # '#006400' # darkgreen
FAIL_COLOR = 'red4' # '#8B0000' # darkred
STEP_COLOR = 'peachpuff'
FONT_NAME = 'Raleway'
def graph(comment="default"):
return Digraph(name='', comment=comment, graph_attr={'rankdir': 'LR'},
node_attr={'style': 'filled',
'fillcolor': STEP_COLOR,
'fontname': FONT_NAME},
edge_attr={'fontname': FONT_NAME})
state_machine = graph()
state_machine.node('Start', )
state_machine.edge('Start', '¬ tag')
state_machine.edge('¬ tag', '¬ tag', label=" ¬ '<'\nadd character")
state_machine.edge('¬ tag', 'tag', label="'<'")
state_machine.edge('tag', '¬ tag', label="'>'")
state_machine.edge('tag', 'tag', label="¬ '>'")
# ignore
display(state_machine)
###Output
_____no_output_____
###Markdown
You see that we start in the non-tag state (`¬ tag`). Here, for every character that is not `'<'`, we add the character to the output and stay in this state. If we read a `'<'`, we switch to the tag state (`tag`) and skip characters until we read a `'>'` character. A First FixLet us now look at the above state machine, and process through our input:```html<input type="text" value="<your name>">``` So what you can see is: We are interpreting the `'>'` of `"<your name>"` as the closing of the tag. However, this is a quoted string, so the `'>'` should be interpreted as a regular character, not as markup. This is an example of _missing functionality:_ We do not handle quoted characters correctly. We haven't claimed yet to take care of all functionality, so we still need to extend our code. So we extend the whole thing. We set up a special "quote" state which processes quoted inputs in tags until the end of the quoted string is reached. This is what the state machine looks like:
###Code
state_machine = graph()
state_machine.node('Start')
state_machine.edge('Start', '¬ quote\n¬ tag')
state_machine.edge('¬ quote\n¬ tag', '¬ quote\n¬ tag',
label="¬ '<'\nadd character")
state_machine.edge('¬ quote\n¬ tag', '¬ quote\ntag', label="'<'")
state_machine.edge('¬ quote\ntag', 'quote\ntag', label="'\"'")
state_machine.edge('¬ quote\ntag', '¬ quote\ntag', label="¬ '\"' ∧ ¬ '>'")
state_machine.edge('quote\ntag', 'quote\ntag', label="¬ '\"'")
state_machine.edge('quote\ntag', '¬ quote\ntag', label="'\"'")
state_machine.edge('¬ quote\ntag', '¬ quote\n¬ tag', label="'>'")
display(state_machine)
###Output
_____no_output_____
###Markdown
This is a bit more complex already. Proceeding from left to right, we first have the state `¬ quote ∧ ¬ tag`, which is our "standard" state for text. If we encounter a `'<'`, we again switch to the "tagged" state `¬ quote ∧ tag`. In this state, however (and only in this state), if we encounter a quotation mark, we switch to the "quotation" state `quote ∧ tag`, in which we remain until we see another quotation mark indicating the end of the string – and then continue in the "tagged" state `¬ quote ∧ tag` until we see the end of the string. Things get even more complicated as HTML allows both single and double quotation characters. Here's a revised implementation of `remove_html_markup()` that takes the above states into account:
###Code
def remove_html_markup(s):
tag = False
quote = False
out = ""
for c in s:
if c == '<' and not quote:
tag = True
elif c == '>' and not quote:
tag = False
elif c == '"' or c == "'" and tag:
quote = not quote
elif not tag:
out = out + c
return out
###Output
_____no_output_____
###Markdown
Now, our previous input works well:
###Code
remove_html_markup('<input type="text" value="<your name>">')
###Output
_____no_output_____
###Markdown
and our earlier tests also pass:
###Code
assert remove_html_markup("Here's some <strong>strong argument</strong>.") == \
"Here's some strong argument."
assert remove_html_markup('<input type="text" value="<your name>">') == ""
###Output
_____no_output_____
###Markdown
However, the above code still has a bug. In two of these inputs, HTML markup is still not properly stripped:```html
<b>foo</b>
<b>"foo"</b>
"<b>foo</b>"
<"b">foo</"b">
```Can you guess which ones these are? Again, a simple assertion will reveal the culprits:
###Code
with ExpectError():
assert remove_html_markup('<b>foo</b>') == 'foo'
remove_html_markup('<b>"foo"</b>')
with ExpectError():
assert remove_html_markup('<b>"foo"</b>') == '"foo"'
remove_html_markup('"<b>foo</b>"')
with ExpectError():
assert remove_html_markup('"<b>foo</b>"') == '"foo"'
with ExpectError():
assert remove_html_markup('<"b">foo</"b">') == 'foo'
###Output
_____no_output_____
###Markdown
So, unfortunately, we're not done yet – our function still has errors. The Devil's Guide to DebuggingLet us now discuss a couple of methods that do _not_ work well for debugging. (These "devil's suggestions" are adapted from the 1993 book "Code Complete" by Steve McConnell.) Printf DebuggingWhen I was a student, I never got any formal training in debugging, so I had to figure this out for myself. What I learned was how to use _debugging output_; in Python, this would be the `print()` function. For instance, I would go and scatter `print()` calls everywhere:
###Code
def remove_html_markup_with_print(s):
tag = False
quote = False
out = ""
for c in s:
print("c =", repr(c), "tag =", tag, "quote =", quote)
if c == '<' and not quote:
tag = True
elif c == '>' and not quote:
tag = False
elif c == '"' or c == "'" and tag:
quote = not quote
elif not tag:
out = out + c
return out
###Output
_____no_output_____
###Markdown
This way of inspecting executions is commonly called "Printf debugging", after the C `printf()` function. Then, running this would allow me to see what's going on in my code:
###Code
remove_html_markup_with_print('<b>"foo"</b>')
###Output
c = '<' tag = False quote = False
c = 'b' tag = True quote = False
c = '>' tag = True quote = False
c = '"' tag = False quote = False
c = 'f' tag = False quote = True
c = 'o' tag = False quote = True
c = 'o' tag = False quote = True
c = '"' tag = False quote = True
c = '<' tag = False quote = False
c = '/' tag = True quote = False
c = 'b' tag = True quote = False
c = '>' tag = True quote = False
###Markdown
Yes, one sees what is going on – but this is horribly inefficient! Think of a 1,000-character input – you'd have to go through 2,000 lines of logs. It may help you, but it's a total time waster. Plus, you have to enter these statements, remove them again... it's a maintenance nightmare. (You may even forget printf's in your code, creating a security problem: Mac OS X versions 10.7 to 10.7.3 would log the password in clear because someone had forgotten to turn off debugging output.) Debugging into Existence I would also try to _debug the program into existence._ Just change things until they work. Let me see: If I remove the conditions "and not quote" from the program, it would actually work again:
###Code
def remove_html_markup_without_quotes(s):
tag = False
quote = False
out = ""
for c in s:
if c == '<': # and not quote:
tag = True
elif c == '>': # and not quote:
tag = False
elif c == '"' or c == "'" and tag:
quote = not quote
elif not tag:
out = out + c
return out
assert remove_html_markup_without_quotes('<"b">foo</"b">') == 'foo'
###Output
_____no_output_____
###Markdown
Cool! Unfortunately, the function still fails on the other input:
###Code
with ExpectError():
assert remove_html_markup_without_quotes('<b>"foo"</b>') == '"foo"'
###Output
Traceback (most recent call last):
File "<ipython-input-30-1d8954a52bcf>", line 2, in <module>
assert remove_html_markup_without_quotes('<b>"foo"</b>') == '"foo"'
AssertionError (expected)
###Markdown
So, maybe we can change things again, such that both work? And maybe the other tests we had earlier won't fail? Let's just continue to change things randomly again and again and again. Oh, and of course, I would never back up earlier versions such that I would be able to keep track of what has changed and when. Use the Most Obvious Fix My favorite: Use the most obvious fix. This means that you're fixing the symptom, not the problem. In our case, this would be something like:
###Code
def remove_html_markup_fixed(s):
if s == '<b>"foo"</b>':
return '"foo"'
...
###Output
_____no_output_____
###Markdown
Miracle! Our earlier failing assertion now works! Now we can do the same for the other failing test, too, and we're done.(Rumor has it that some programmers use this technique to get their tests to pass...) Things to do InsteadAs with any devil's guide, you get an idea of how to do things by doing the _opposite._ What this means is:1. Understand the code2. Fix the problem, not the symptom3. Proceed systematicallywhich is what we will apply for the rest of this chapter. From Defect to FailureTo understand how to systematically debug a program, we first have to understand how failures come to be. The typical debugging situation looks like this. We have a program (execution), taking some input and producing some output. The output is in *error* (✘), meaning an unwanted and unintended deviation from what is correct, right, or true.The input, in contrast, is assumed to be correct (✔). (Otherwise, we wouldn't search for the bug in our program, but in whatever produced its input.)
###Code
# ignore
def execution_diagram(show_steps=True, variables=[],
steps=3, error_step=666,
until=666, fault_path=[]):
dot = graph()
dot.node('input', shape='none', fillcolor='white', label=f"Input {PASS}",
fontcolor=PASS_COLOR)
last_outgoing_states = ['input']
for step in range(1, min(steps + 1, until)):
if step == error_step:
step_label = f'Step {step} {FAIL}'
step_color = FAIL_COLOR
else:
step_label = f'Step {step}'
step_color = None
if step >= error_step:
state_label = f'State {step} {FAIL}'
state_color = FAIL_COLOR
else:
state_label = f'State {step} {PASS}'
state_color = PASS_COLOR
state_name = f's{step}'
outgoing_states = []
incoming_states = []
if not variables:
dot.node(name=state_name, shape='box',
label=state_label, color=state_color,
fontcolor=state_color)
else:
var_labels = []
for v in variables:
vpath = f's{step}:{v}'
if vpath in fault_path:
var_label = f'<{v}>{v} ✘'
outgoing_states.append(vpath)
incoming_states.append(vpath)
else:
var_label = f'<{v}>{v}'
var_labels.append(var_label)
record_string = " | ".join(var_labels)
dot.node(name=state_name, shape='record',
label=nohtml(record_string), color=state_color,
fontcolor=state_color)
if not outgoing_states:
outgoing_states = [state_name]
if not incoming_states:
incoming_states = [state_name]
for outgoing_state in last_outgoing_states:
for incoming_state in incoming_states:
if show_steps:
dot.edge(outgoing_state, incoming_state,
label=step_label, fontcolor=step_color)
else:
dot.edge(outgoing_state, incoming_state)
last_outgoing_states = outgoing_states
if until > steps + 1:
# Show output
if error_step > steps:
dot.node('output', shape='none', fillcolor='white',
label=f"Output {PASS}", fontcolor=PASS_COLOR)
else:
dot.node('output', shape='none', fillcolor='white',
label=f"Output {FAIL}", fontcolor=FAIL_COLOR)
for outgoing_state in last_outgoing_states:
label = "Execution" if steps == 0 else None
dot.edge(outgoing_state, 'output', label=label)
display(dot)
# ignore
execution_diagram(show_steps=False, steps=0, error_step=0)
###Output
_____no_output_____
###Markdown
This situation we see above is what we call a *failure*: An externally visible _error_ in the program behavior, with the error again being an unwanted and unintended deviation from what is correct, right, or true. How does this failure come to be? The execution we see above breaks down into several program _states_, one after the other.
###Code
# ignore
for until in range(1, 6):
execution_diagram(show_steps=False, until=until, error_step=2)
###Output
_____no_output_____
###Markdown
Initially, the program state is still correct (✔). However, at some point in the execution, the state gets an _error_, also known as a *fault*. This fault – again an unwanted and unintended deviation from what is correct, right, or true – then propagates along the execution, until it becomes externally visible as a _failure_.(In reality, there are many, many more states than just this, but these would not fit in a diagram.) How does a fault come to be? Each of these program states is produced by a _step_ in the program code. These steps take a state as input and produce another state as output. Technically speaking, the program inputs and outputs are also parts of the program state, so the input flows into the first step, and the output is the state produced by the last step.
###Code
# ignore
for until in range(1, 6):
execution_diagram(show_steps=True, until=until, error_step=2)
###Output
_____no_output_____
###Markdown
Now, in the diagram above, Step 2 gets a _correct_ state as input and produces a _faulty_ state as output. The produced fault then propagates across more steps to finally become visible as a _failure_. The goal of debugging thus is to _search_ for the step in which the state first becomes faulty. The _code_ associated with this step is again an error – an unwanted and unintended deviation from what is correct, right, or true – and is called a _defect_. This is what we have to find – and to fix. Sounds easy, right? Unfortunately, things are not that easy, and that has something to do with the program state. Let us assume our state consists of three variables, `v1` to `v3`, and that Step 2 produces a fault in `v2`. This fault then propagates to the output:
###Code
# ignore
for until in range(1, 6):
execution_diagram(show_steps=True, variables=['v1', 'v2', 'v3'],
error_step=2,
until=until, fault_path=['s2:v2', 's3:v2'])
###Output
_____no_output_____
###Markdown
The way these faults propagate is called a *cause-effect chain*:* The _defect_ in the code _causes_ a fault in the state when executed.* This _fault_ in the state then _propagates_ through further execution steps...* ... until it becomes visible as a _failure_. Since the code was originally written by a human, any defect can be related to some original _mistake_ the programmer made. This gives us a number of terms that all are more precise than the general "error" or the colloquial "bug":* A _mistake_ is a human act or decision resulting in an error.* A _defect_ is an error in the program code. Also called *bug*.* A _fault_ is an error in the program state. Also called *infection*.* A _failure_ is an externally visible error in the program behavior. Also called *malfunction*.The cause-effect chain of events is thus* Mistake → Defect → Fault → ... → Fault → FailureNote that not every defect also causes a failure, which is why, despite all testing, there can still be defects in the code looming around until the right conditions are met to trigger them. On the other hand, though, _every failure can be traced back to the defect that causes it_. Our job is to break the cause-effect chain. From Failure to DefectTo find a defect from a failure, we _trace back_ the faults along their _propagation_ – that is, we find out which faults in the earlier state have caused the later faults. We start from the very end of the execution and then gradually progress backwards in time, examining fault after fault until we find a _transition_ from a correct state to a faulty state – that is, a step in which a correct state comes in and a faulty state comes out. At this point, we have found the origin of the failure – and the defect that causes it. What sounds like a straight-forward strategy, unfortunately, doesn't always work this way in practice. That is because of the following problems of debugging:* First, program states are actually _large_, encompassing dozens to thousands of variables, possibly even more. If you have to search all of these manually and check them for faults, you will spend a lot of time for a single state.* Second, you do not always know _whether a state is correct or not._ While most programs have some form of specification for their inputs and outputs, these do not necessarily exist for intermediate results. If one had a specification that could check each state for correctness (possibly even automatically), debugging would be trivial. Unfortunately, it is not, and that's partly due to the lack of specifications.* Third, executions typically do not come in a handful of steps, as in the diagrams above; instead, they can easily encompass _thousands to millions of steps._ This means that you will have to examine not just one state, but several, making the problem much worse.To make your search efficient, you thus have to _focus_ your search – starting with most likely causes and gradually progressing to the less probable causes. This is what we call a _debugging strategy_. The Scientific MethodNow that we know how failures come to be, let's look into how to systematically find their causes. What we need is a _strategy_ that helps us search for how and when the failure comes to be. For this, we use a process called the *scientific method*. When we are debugging a program, we are trying to find the causes of a given effect – very much like natural scientists try to understand why things in nature are as they are and how they come to be. 
Over thousands of years, scientists have conducted _observations_ and _experiments_ to come to an understanding of how our world works. The process by which experimental scientists operate has been coined "The scientific method". This is how it works: 1. Formulate a _question_, as in "Why does this apple fall down?".2. Invent a _hypothesis_ based on knowledge obtained while formulating the question, that may explain the observed behavior. 3. Determining the logical consequences of the hypothesis, formulate a _prediction_ that can _support_ or _refute_ the hypothesis. Ideally, the prediction would distinguish the hypothesis from likely alternatives.4. _Test_ the prediction (and thus the hypothesis) in an _experiment_. If the prediction holds, confidence in the hypothesis increases; otherwise, it decreases.5. Repeat Steps 2–4 until there are no discrepancies between hypothesis and predictions and/or observations. At this point, your hypothesis may be named a *theory* – that is, a predictive and comprehensive description of some aspect of the natural world. The gravitational theory, for instance, predicts very well how the moon revolves around the earth, and how the earth revolves around the sun. Our debugging problems are of a slightly lesser scale – we'd like a theory of how our failure came to be – but the process is pretty much the same.
###Code
dot = graph()
dot.node('Hypothesis')
dot.node('Observation')
dot.node('Prediction')
dot.node('Experiment')
dot.edge('Hypothesis', 'Observation',
label="<Hypothesis<BR/>is <I>supported:</I><BR/>Refine it>",
dir='back')
dot.edge('Hypothesis', 'Prediction')
dot.node('Problem Report', shape='none', fillcolor='white')
dot.edge('Problem Report', 'Hypothesis')
dot.node('Code', shape='none', fillcolor='white')
dot.edge('Code', 'Hypothesis')
dot.node('Runs', shape='none', fillcolor='white')
dot.edge('Runs', 'Hypothesis')
dot.node('More Runs', shape='none', fillcolor='white')
dot.edge('More Runs', 'Hypothesis')
dot.edge('Prediction', 'Experiment')
dot.edge('Experiment', 'Observation')
dot.edge('Observation', 'Hypothesis',
label="<Hypothesis<BR/>is <I>rejected:</I><BR/>Seek alternative>")
display(dot)
###Output
_____no_output_____
###Markdown
In debugging, we proceed the very same way – indeed, we are treating bugs as if they were natural phenomena. This analogy may sound far-fetched, as programs are anything but natural. Nature, by definition, is not under our control. But bugs are _out of our control just as well._ Hence, the analogy is not that far-fetched – and we can apply the same techniques for debugging. Finding a Hypothesis Let us apply the scientific method to our Python program which removes HTML tags. First of all, let us recall the problem – `remove_html_markup()` works for some inputs, but fails on others.
###Code
for i, html in enumerate(['<b>foo</b>',
'<b>"foo"</b>',
'"<b>foo</b>"',
'<"b">foo</"b">']):
result = remove_html_markup(html)
print("%-2d %-15s %s" % (i + 1, html, result))
###Output
1 <b>foo</b> foo
2 <b>"foo"</b> foo
3 "<b>foo</b>" <b>foo</b>
4 <"b">foo</"b"> foo
###Markdown
Input 1 and 4 work as expected, the others do not. We can write these down in a table, such that we can always look back at our previous results:|Input|Expectation|Output|Outcome||-----|-----------|------|-------||`<b>foo</b>`|`foo`|`foo`|✔||`<b>"foo"</b>`|`"foo"`|`foo`|✘||`"<b>foo</b>"`|`"foo"`|`<b>foo</b>`|✘||`<"b">foo</"b">`|`foo`|`foo`|✔|
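(As a side sketch that is not part of the original chapter: such a log can also be kept as data, so all experiments can be re-run with a single cell and the table above stays in sync with the code.)
```python
# Keep the observation log as data; re-running this cell repeats all experiments.
observations = [  # (input, expectation)
    ('<b>foo</b>', 'foo'),
    ('<b>"foo"</b>', '"foo"'),
    ('"<b>foo</b>"', '"foo"'),
    ('<"b">foo</"b">', 'foo'),
]
for html, expected in observations:
    output = remove_html_markup(html)
    print("%-18s %-10s %-12s %s" % (html, expected, output,
                                    "✔" if output == expected else "✘"))
```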
###Code
quiz("From the difference between success and failure,"
" we can already devise some observations about "
" what's wrong with the output."
" Which of these can we turn into general hypotheses?",
["Double quotes are stripped from the tagged input.",
"Tags in double quotes are not stripped.",
"The tag '<b>' is always stripped from the input.",
"Four-letter words are stripped."], [298 % 33, 1234 % 616])
###Output
_____no_output_____
###Markdown
Testing a HypothesisThe hypotheses that remain are:1. Double quotes are stripped from the tagged input.2. Tags in double quotes are not stripped. These may be two separate issues, but chances are they are tied to each other. Let's focus on 1., because it is simpler. Does it hold for all inputs, even untagged ones? Our hypothesis becomes1. Double quotes are stripped from the ~~tagged~~ input. Let's devise an experiment to validate this. If we feed the string```html"foo"```(including the double quotes) into `remove_html_markup()`, we should obtain```html"foo"```as result – that is, the output should be the unchanged input. However, if our hypothesis 1. is correct, we should obtain```htmlfoo```as result – that is, "Double quotes are stripped from the input" as predicted by the hypothesis. We can very easily test this hypothesis:
###Code
remove_html_markup('"foo"')
###Output
_____no_output_____
###Markdown
Our hypothesis is confirmed! We can add this to our list of observations. |Input|Expectation|Output|Outcome||-----|-----------|------|-------||`<b>foo</b>`|`foo`|`foo`|✔||`<b>"foo"</b>`|`"foo"`|`foo`|✘||`"<b>foo</b>"`|`"foo"`|`<b>foo</b>`|✘||`<"b">foo</"b">`|`foo`|`foo`|✔||`"foo"`|`"foo"`|`foo`|✘| You can try out the hypothesis with more inputs – and it remains valid. Any non-markup input that contains double quotes will have these stripped. Where does that quote-stripping come from? This is where we need to explore the cause-effect chain. The only place in `remove_html_markup()` where quotes are handled is this line:```pythonelif c == '"' or c == "'" and tag: quote = not quote```So, quotes should be removed only if `tag` is set. However, `tag` can be set only if the input contains a markup tag, which is not the case for a simple input like `"foo"`. Hence, what we observe is actually _impossible._ Yet, it happens. Refining a HypothesisDebugging is a game of falsifying assumptions. You assume the code works – it doesn't. You assume the `tag` flag cannot be set – yet it may be. What do we do? Again, we create a hypothesis:1. The error is due to `tag` being set. How do we know whether `tag` is being set? Let me introduce one of the most powerful debugging tools ever invented, the `assert` statement. The statement```pythonassert cond```evaluates the given condition `cond` and* if it holds: proceed as usual* if `cond` does not hold: throw an exceptionAn `assert` statement _encodes our assumptions_ and as such, should never fail. If it does, well, then something is wrong. Using `assert`, we can check the value of `tag` all through the loop:
###Code
def remove_html_markup_with_tag_assert(s):
tag = False
quote = False
out = ""
for c in s:
assert not tag # <=== Just added
if c == '<' and not quote:
tag = True
elif c == '>' and not quote:
tag = False
elif c == '"' or c == "'" and tag:
quote = not quote
elif not tag:
out = out + c
return out
###Output
_____no_output_____
###Markdown
Our expectation is that this assertion would fail. So, do we actually get an exception? Try it out for yourself by uncommenting the following line:
###Code
# remove_html_markup_with_tag_assert('"foo"')
quiz("What happens after inserting the above assertion?",
["The program raises an exception. (i.e., tag is set)",
"The output is as before, i.e., foo without quotes."
" (which means that tag is not set)"],
2)
###Output
_____no_output_____
###Markdown
Here's the solution:
###Code
with ExpectError():
result = remove_html_markup_with_tag_assert('"foo"')
result
###Output
_____no_output_____
###Markdown
Refuting a HypothesisWe did not get an exception, hence we reject our hypothesis:1. ~~The error is due to `tag` being set.~~ Again, let's go back to the only place in our code where quotes are handled:```pythonelif c == '"' or c == "'" and tag: quote = not quote```Because of the assertion, we already know that `tag` is always False. Hence, this condition should never hold either. But maybe there's something wrong with the condition such that it holds? Here's our hypothesis:1. The error is due to the quote condition evaluating to true If the condition evaluates to true, then `quote` should be set. We could now go and assert that `quote` is false; but we only care about the condition. So we insert an assertion that assumes that the code setting the `quote` flag is never reached:
###Code
def remove_html_markup_with_quote_assert(s):
tag = False
quote = False
out = ""
for c in s:
if c == '<' and not quote:
tag = True
elif c == '>' and not quote:
tag = False
elif c == '"' or c == "'" and tag:
assert False # <=== Just added
quote = not quote
elif not tag:
out = out + c
return out
###Output
_____no_output_____
###Markdown
Our expectation this time again is that the assertion fails. So, do we get an exception this time? Try it out for yourself by uncommenting the following line:
###Code
# remove_html_markup_with_quote_assert('"foo"')
quiz("What happens after inserting the 'assert' tag?",
["The program raises an exception (i.e., the quote condition holds)",
"The output is still foo (i.e., the quote condition does not hold)"], 29 % 7)
###Output
_____no_output_____
###Markdown
Here's what happens now that we have the `assert` tag:
###Code
with ExpectError():
result = remove_html_markup_with_quote_assert('"foo"')
###Output
Traceback (most recent call last):
File "<ipython-input-49-9ce255289291>", line 2, in <module>
result = remove_html_markup_with_quote_assert('"foo"')
File "<ipython-input-46-9c8a53a91780>", line 12, in remove_html_markup_with_quote_assert
assert False # <=== Just added
AssertionError (expected)
###Markdown
From this observation, we can deduce that our hypothesis is _confirmed_:1. The error is due to the quote condition evaluating to true (CONFIRMED)and the _condition is actually faulty._ It evaluates to True although `tag` is always False:```pythonelif c == '"' or c == "'" and tag: quote = not quote```But this condition holds for single and double quotes. Is there a difference? Let us see whether our observations generalize towards general quotes:1. ~~Double~~ quotes are stripped from the input. We can verify these hypotheses with an additional experiment. We go back to our original implementation (without any asserts), and then check it:
###Code
remove_html_markup("'foo'")
###Output
_____no_output_____
###Markdown
Surprise: Our hypothesis is rejected and we can add another observation to our table:|Input|Expectation|Output|Outcome||-----|-----------|------|-------||`'foo'`|`'foo'`|`'foo'`|✔|So, the condition* becomes True when a double quote is seen* becomes False (as it should) with single quotes At this point, you should have enough material to solve the problem. How do we have to fix the condition? Here are four alternatives:```python
c == "" or c == '' and tag          # Choice 1
c == '"' or c == "'" and not tag    # Choice 2
(c == '"' or c == "'") and tag      # Choice 3
...                                 # Something else
```
###Code
quiz("How should the condition read?",
["Choice 1", "Choice 2", "Choice 3", "Something else"],
399 % 4)
###Output
_____no_output_____
###Markdown
Fixing the Bug So, you have spotted the defect: In Python (and most other languages), `and` takes precedence over `or`, which is why the condition is wrong. It should read:```python(c == '"' or c == "'") and tag```(Actually, good programmers rarely depend on precedence; it is considered good style to use parentheses lavishly.) So, our hypothesis now has become1. The error is due to the quote condition evaluating to true Is this our final hypothesis? We can check whether our earlier examples should now work well:|Input|Expectation|Output|Outcome||-----|-----------|------|-------||`<b>foo</b>`|`foo`|`foo`|✔||`<b>"foo"</b>`|`"foo"`|`foo`|✘||`"<b>foo</b>"`|`"foo"`|`<b>foo</b>`|✘||`<"b">foo</"b">`|`foo`|`foo`|✔||`"foo"`|`"foo"`|`foo`|✘||`'foo'`|`'foo'`|`'foo'`|✔|In all of these examples, the `quote` flag should now no longer be set outside of tags; hence, everything should work as expected. In terms of the scientific process, we now have a *theory* – a hypothesis that* is consistent with all earlier observations* predicts future observations (in our case: correct behavior)For debugging, our problems are usually too small for a big word like theory, so we use the word *diagnosis* instead. You should start to fix your code if and only if you have a diagnosis. So we actually go and fix the code accordingly:
###Code
def remove_html_markup(s):
tag = False
quote = False
out = ""
for c in s:
if c == '<' and not quote:
tag = True
elif c == '>' and not quote:
tag = False
elif (c == '"' or c == "'") and tag: # <-- FIX
quote = not quote
elif not tag:
out = out + c
return out
###Output
_____no_output_____
###Markdown
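Before re-running the tests, here is a quick sanity check of the diagnosis (an added sketch, not part of the original text): with `tag == False` and `c == '"'`, the unparenthesized and the parenthesized conditions indeed differ, because `and` binds more tightly than `or`.
```python
# Precedence check: the buggy condition vs. the fixed one for c == '"', tag == False.
c, tag = '"', False
print(c == '"' or c == "'" and tag)    # True  -- parsed as (c == '"') or ((c == "'") and tag)
print((c == '"' or c == "'") and tag)  # False -- quotes only matter inside a tag
```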
We verify that the fix was successful by running our earlier tests. Not only should the previously failing tests now pass, the previously passing tests also should not be affected. Fortunately, all tests now pass:
###Code
assert remove_html_markup("Here's some <strong>strong argument</strong>.") == \
"Here's some strong argument."
assert remove_html_markup(
'<input type="text" value="<your name>">') == ""
assert remove_html_markup('<b>foo</b>') == 'foo'
assert remove_html_markup('<b>"foo"</b>') == '"foo"'
assert remove_html_markup('"<b>foo</b>"') == '"foo"'
assert remove_html_markup('<"b">foo</"b">') == 'foo'
###Output
_____no_output_____
###Markdown
So, the hypothesis was a theory after all, and our diagnosis was correct. Alternate PathsNote that there are many ways by which we can get to the defect. We could also have started with our other hypothesis2. Tags in double quotes are not strippedand by reasoning and experiments, we would have reached the same conclusion that the condition is faulty:* To strip tags, the `tag` flag must be set (but it is not).* To set the `tag` flag, the `quote` variable must not be set (but it is).* The `quote` flag is set under the given condition (which thus must be faulty). But just fixing is not enough. We also must make sure the error does not occur again. How can we do that? With our assertions, above, we already have a test suite that should catch several errors – but not all.To be 100% sure, we could add an assertion to `remove_html_markup()` that checks the final result for correctness. Unfortunately, writing such an assertion is just as complex as writing the function itself.There is one assertion, though, which could be placed in the loop body to catch this kind of errors, and which could remain in the code. Which is it?
###Code
quiz("Which assertion would have caught the problem?",
["assert quote and not tag",
"assert quote or not tag",
"assert tag or not quote",
"assert tag and not quote"],
3270 - 3267)
###Output
_____no_output_____
###Markdown
Indeed, the statement```pythonassert tag or not quote```is correct. This excludes the situation of ¬`tag` ∧ `quote` – that is, the `tag` flag is not set, but the `quote` flag is. If you remember our state machine from above, this is actually a state that should never exist:
###Code
display(state_machine)
###Output
_____no_output_____
###Markdown
Here's our function in its "final" state. As software goes, software is never final – and this may also hold for our function, as there is still room for improvement. For this chapter though, we leave it be.
###Code
def remove_html_markup(s):
tag = False
quote = False
out = ""
for c in s:
assert tag or not quote
if c == '<' and not quote:
tag = True
elif c == '>' and not quote:
tag = False
elif (c == '"' or c == "'") and tag:
quote = not quote
elif not tag:
out = out + c
return out
remove_html_markup('"<b>"foo"</b>"')
###Output
_____no_output_____
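As a small added demonstration (the helper name below is ours, not from the original chapter), the invariant would indeed have caught the original defect: re-introducing the unparenthesized condition in a copy of the function makes the assertion fail on the very input that started our search.
```python
def remove_html_markup_buggy_with_invariant(s):
    tag = False
    quote = False
    out = ""
    for c in s:
        assert tag or not quote              # the invariant from above
        if c == '<' and not quote:
            tag = True
        elif c == '>' and not quote:
            tag = False
        elif c == '"' or c == "'" and tag:   # the original, unparenthesized condition
            quote = not quote
        elif not tag:
            out = out + c
    return out

with ExpectError():
    remove_html_markup_buggy_with_invariant('"foo"')  # raises AssertionError
```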
###Markdown
Become a Better DebuggerWe have now systematically fixed a bug. In this book, we will explore a number of techniques to make debugging easier – coming up with automated diagnoses, explanations, even automatic repairs, including for our example above. But there are also a number of things _you_ can do to become a better debugger. Follow the ProcessIf you're an experienced programmer, you may have spotted the problem in `remove_html_markup()` immediately, and started fixing the code right away. But this is dangerous and risky.Why is this so? Well, because you should first* try to understand the problem, and * have a full diagnosis before starting to fix away.You _can_ skip these steps, and jump right to your interactive debugger the very moment you see a failure, happily stepping through the program. This may even work well for simple problems, including this one. The risk, however, is that this narrows your view to just this one execution, which limits your ability to understand _all_ the circumstances of the problem. Even worse: If you start "fixing" the bug without exactly understanding the problem, you may end up with an incomplete solution – as illustrated in "The Devil's Guide to Debugging", above. Keep a LogA second risk of starting debugging too soon is that it lets you easily deviate from a systematic process. Remember how we wrote down every experiment in a table? How we numbered every hypothesis? This is not just for teaching. Writing these things down explicitly allows you to keep track of all your observations and hypotheses over time.|Input|Expectation|Output|Outcome||-----|-----------|------|-------||`<b>foo</b>`|`foo`|`foo`|✔|Every time you come up with a new hypothesis, you can immediately check it against your earlier observations, which will help you eliminate unlikely ones from the start. This is a bit like in the classic "Mastermind" board game, in which you have to guess some secret combination of pins, and in which your opponent gives you hints on whether and how your guesses are correct. At any time, you can see your previous guesses (experiments) and the results (observations) you got; any new guess (hypothesis) has to be consistent with the previous observations and experiments. ![Mastermind board game](https://upload.wikimedia.org/wikipedia/commons/2/2d/Mastermind.jpg) Keeping such a log also allows you to interrupt your debugging session at any time. You can be home in time, sleep over the problem, and resume the next morning with a refreshed mind. You can even hand over the log to someone else, stating your findings so far.The alternative to having a log is to _keep it all in memory_. This only works for short amounts of time, as it puts a higher and higher cognitive load on your memory as you debug along. After some time, you will forget earlier observations, which leads to mistakes. Worst of all, any interruption will break your concentration and make you forget things, so you can't stop debugging until you're done.Sure, if you are a real master, you can stay glued to the screen all night. But I'd rather be home in time, thank you. RubberduckingA great technique to revisit your observations and to come up with new hypotheses is to _explain the problem to someone else_. In this process, the "someone else" is important, but even more important is that _you are explaining the problem to yourself_! As Kernighan and Pike \cite{Kernighan1999} put it:> Sometimes it takes no more than a few sentences, followed by an embarrassed "Never mind. I see what's wrong. 
Sorry to bother you."The reason why this works is that teaching someone else forces you to take different perspectives, and these help you resolve the inconsistency between what you assume and what you actually observe.Since that "someone else" can be totally passive, you can even replace her with an inanimate object to talk to – even a rubber duck. This technique is called *rubber duck debugging* or *rubberducking* – the idea is that you explain your problem to a rubber duck first before interrupting one of your co-workers with the problem. Some programmers, when asked for advice, explicitly request that you "explain your problem to the duck first", knowing that this resolves a good fraction of problems. ![Rubber duck debugging](https://upload.wikimedia.org/wikipedia/commons/d/d5/Rubber_duck_assisting_with_debugging.jpg) The Cost of Debugging\todo{add recent stuff on how much time debugging takes}And it's not only that debugging takes time – the worst thing is that it is a search process, which can take anything between a few minutes and several hours, sometimes even days and weeks. But even if you never know how much time a bug will take, it's a bit of a blessing to use a process which gradually gets you towards its cause. History of DebuggingEngineers and programmers have long used the term "bug" for faults in their systems – as if it were something that crept into an otherwise flawless program to cause the effects that none could explain. And from a psychological standpoint, it is far easier to blame some "bug" rather than taking responsibility ourselves. In the end, though, we have to face the fact: We made the bugs, and they are ours to fix.Having said that, there has been one recorded instance where a real bug has crept into a system. That was on September 9, 1947, when a moth got stuck in the relay of a Harvard Mark II machine. This event was logged, and the log book is now on display at the Smithsonian National Museum of American History, as "First actual case of bug being found." ![First actual case of bug being found](https://upload.wikimedia.org/wikipedia/commons/f/ff/First_Computer_Bug%2C_1945.jpg) The actual term "bug", however, is much older. What do you think is its origin?
###Code
import hashlib
bughash = hashlib.md5(b"debug").hexdigest()
quiz('Where has the name "bug" been used to denote disruptive events?',
[
'In the early days of Morse telegraphy, referring to a special key '
'that would send a string of dots',
'Among radio technicians to describe a device that '
'converts electromagnetic field variations into acoustic signals',
"In Shakespeare's " '"Henry VI", referring to a walking spectre',
'In Middle English, where the word "bugge" is the basis for terms '
'like "bugbear" and "bugaboo"'
],
[bughash.index(i) for i in "d42f"]
)
###Output
_____no_output_____
###Markdown
(Source: \cite{jargon}, \cite{wikipedia:debugging}) Synopsis In this chapter, we introduce some basics of how failures come to be as well as a general process for debugging. Lessons Learned1. An _error_ is a deviation from what is correct, right, or true. Specifically, * A _mistake_ is a human act or decision resulting in an error. * A _defect_ is an error in the program code. Also called *bug*. * A _fault_ is an error in the program state. Also called *infection*. * A _failure_ is an externally visible error in the program behavior. Also called *malfunction*.2. In a failing program execution, a mistake by the programmer results in a defect in the code, which creates a fault in the state, which propagates until it results in a failure. Tracing back fault propagation allows to identify the defect that causes the failure.3. In debugging, the _scientific method_ allows to systematically identify failure causes by gradually refining and refuting hypotheses based on experiments and observations.4. You can become a better debugger by * Following a systematic process like the scientific method * Keeping a log of your observations and hypotheses * Making your observations and conclusions explicit by telling them somebody (or something). Next StepsIn the next chapters, we will learn how to* [trace and observe executions](Tracer.ipynb)* [build your own interactive debugger](Debugger.ipynb)* [locate defects automatically by correlating failures and code coverage](StatisticalDebugger.ipynb)* [identify and simplify failure-inducing inputs](Reducer.ipynb)Enjoy! BackgroundThere are several good books on debugging, but these three are especially recommended:* _Debugging_ by Agans \cite{agans2006-debugging} takes a pragmatic approach to debugging, highlighting systematic approaches that help for all kinds of application-specific problems;* _Why Programs Fail_ by Zeller \cite{zeller2009-why-programs-fail} takes a more academic approach, creating theories of how failures come to be and systematic debugging processes;* _Effective Debugging_ by Spinellis \cite{spinellis2016-effective-debugging} aims for a middle ground between the two, creating general recipes and recommendations that easily instantiate towards specific problems.All these books focus on _manual_ debugging and the debugging process, just like this chapter; for _automated_ debugging, simply read on :-) Exercises Exercise 1: Get Acquainted with Notebooks and PythonYour first exercise in this book is to get acquainted with notebooks and Python, such that you can run the code examples in the book – and try out your own. Here are a few tasks to get you started. Beginner Level: Run Notebooks in Your BrowserThe easiest way to get access to the code is to run them in your browser.1. From the [Web Page](__CHAPTER_HTML__), check out the menu at the top. Select `Resources` $\rightarrow$ `Edit as Notebook`.2. After a short waiting time, this will open a Jupyter Notebook right within your browser, containing the current chapter as a notebook.3. You can again scroll through the material, but you click on any code example to edit and run its code (by entering Shift + Return). You can edit the examples as you please.4. Note that code examples typically depend on earlier code, so be sure to run the preceding code first.5. Any changes you make will not be saved (unless you save your notebook to disk).For help on Jupyter Notebooks, from the [Web Page](__CHAPTER_HTML__), check out the `Help` menu. 
Advanced Level: Run Python Code on Your MachineThis is useful if you want to make greater changes, but do not want to work with Jupyter.1. From the [Web Page](__CHAPTER_HTML__), check out the menu at the top. Select `Resources` $\rightarrow$ `Download Code`. 2. This will download the Python code of the chapter as a single Python .py file, which you can save to your computer.3. You can then open the file, edit it, and run it in your favorite Python environment to re-run the examples.4. Most importantly, you can [import it](Importing.ipynb) into your own code and reuse functions, classes, and other resources.For help on Python, from the [Web Page](__CHAPTER_HTML__), check out the `Help` menu. Pro Level: Run Notebooks on Your MachineThis is useful if you want to work with Jupyter on your machine. This will allow you to also run more complex examples, such as those with graphical output.1. From the [Web Page](__CHAPTER_HTML__), check out the menu at the top. Select `Resources` $\rightarrow$ `All Notebooks`. 2. This will download all Jupyter Notebooks as a collection of .ipynb files, which you can save to your computer.3. You can then open the notebooks in Jupyter Notebook or Jupyter Lab, edit them, and run them. To navigate across notebooks, open the notebook [`00_Table_of_Contents.ipynb`](00_Table_of_Contents.ipynb).4. You can also download individual notebooks using Select `Resources` $\rightarrow$ `Download Notebook`. Running these, however, will require that you have the other notebooks downloaded already.For help on Jupyter Notebooks, from the [Web Page](__CHAPTER_HTML__), check out the `Help` menu. Boss Level: Contribute!This is useful if you want to contribute to the book with patches or other material. It also gives you access to the very latest version of the book.1. From the [Web Page](__CHAPTER_HTML__), check out the menu at the top. Select `Resources` $\rightarrow$ `Project Page`. 2. This will get you to the GitHub repository which contains all sources of the book, including the latest notebooks.3. You can then _clone_ this repository to your disk, such that you get the latest and greatest.4. You can report issues and suggest pull requests on the GitHub page.5. Updating the repository with `git pull` will get you updated.If you want to contribute code or text, check out the [Guide for Authors](Guide_for_Authors.ipynb). Exercise 2: More Bugs!You may have noticed that our `remove_html_markup()` function is still not working perfectly under all circumstances. The error has something to do with different quotes occurring in the input. Part 1: Find the ProblemWhat does the problem look like? Set up a test case that demonstrates the problem.
###Code
assert(...)
###Output
_____no_output_____
###Markdown
Set up additional test cases as useful. **Solution.** The remaining problem stems from the fact that in `remove_html_markup()`, we do not differentiate between single and double quotes. Hence, if we have a _quote within a quoted text_, the function may get confused. Notably, a string that begins with a double quote may be interpreted as ending when a single quote is seen, and vice versa. Here's an example of such a string:```html<b title="<Shakespeare's play>">foo</b>``` When we remove the HTML markup, the `>` in the string is interpreted as _unquoted_. Hence, it is interpreted as ending the tag, such that the rest of the tag is not removed.
###Code
s = '<b title="<Shakespeare' + "'s play>" + '">foo</b>'
s
remove_html_markup(s)
with ExpectError():
assert(remove_html_markup(s) == "foo")
###Output
Traceback (most recent call last):
File "<ipython-input-63-00bc84e50798>", line 2, in <module>
assert(remove_html_markup(s) == "foo")
AssertionError (expected)
###Markdown
Part 2: Identify Extent and CauseUsing the scientific method, identify the extent and cause of the problem. Write down your hypotheses and log your observations, as in|Input|Expectation|Output|Outcome||-----|-----------|------|-------||(input)|(expectation)|(output)|(outcome)| **Solution.** The first step is obviously|Input|Expectation|Output|Outcome||-----|-----------|------|-------||`<b title="<Shakespeare's play>">foo</b>`|`foo`|`"foo`|✘| Part 3: Fix the ProblemDesign a fix for the problem. Show that it satisfies the earlier tests and does not violate any existing test. **Solution**. Here's an improved implementation that actually tracks the opening and closing quote by storing the quoting character in the `quote` variable. (If `quote` is `''`, we are not in a string.)
###Code
def remove_html_markup_with_proper_quotes(s):
tag = False
quote = ''
out = ""
for c in s:
assert tag or quote == ''
if c == '<' and quote == '':
tag = True
elif c == '>' and quote == '':
tag = False
elif (c == '"' or c == "'") and tag and quote == '':
# beginning of string
quote = c
elif c == quote:
# end of string
quote = ''
elif not tag:
out = out + c
return out
###Output
_____no_output_____
###Markdown
Python enthusiasts may note that we could also write `not quote` instead of `quote == ''`, leaving most of the original code untouched. We stick to classic Boolean comparisons here. The function now satisfies the earlier failing test:
###Code
assert(remove_html_markup_with_proper_quotes(s) == "foo")
###Output
_____no_output_____
###Markdown
as well as all our earlier tests:
###Code
assert remove_html_markup_with_proper_quotes(
"Here's some <strong>strong argument</strong>.") == \
"Here's some strong argument."
assert remove_html_markup_with_proper_quotes(
'<input type="text" value="<your name>">') == ""
assert remove_html_markup_with_proper_quotes('<b>foo</b>') == 'foo'
assert remove_html_markup_with_proper_quotes('<b>"foo"</b>') == '"foo"'
assert remove_html_markup_with_proper_quotes('"<b>foo</b>"') == '"foo"'
assert remove_html_markup_with_proper_quotes('<"b">foo</"b">') == 'foo'
###Output
_____no_output_____
###Markdown
Introduction to DebuggingIn this book, we want to explore _debugging_ - the art and science of fixing bugs in computer software. In particular, we want to explore techniques that _automatically_ answer questions like: Where is the bug? When does it occur? And how can we repair it? But before we start automating the debugging process, we first need to understand what this process is.In this chapter, we introduce basic concepts of systematic software debugging and the debugging process, and at the same time get acquainted with Python and interactive notebooks.
###Code
from bookutils import YouTubeVideo, quiz
YouTubeVideo("bCHRCehDOq0")
###Output
_____no_output_____
###Markdown
**Prerequisites*** The book is meant to be a standalone reference; however, a number of _great books on debugging_ are listed at the end,* Knowing a bit of _Python_ is helpful for understanding the code examples in the book. SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from debuggingbook.Intro_Debugging import <identifier>```and then make use of the following features.In this chapter, we introduce some basics of how failures come to be as well as a general process for debugging. A Simple Function Your Task: Remove HTML MarkupLet us start with a simple example. You may have heard of how documents on the Web are made out of text and HTML markup. HTML markup consists of _tags_ in angle brackets that surround the text, providing additional information on how the text should be interpreted. For instance, in the HTML text```htmlThis is <em>emphasized</em>.```the word "emphasized" is enclosed in the HTML tags `<em>` (start) and `</em>` (end), meaning that it should be interpreted (and rendered) in an emphasized way – typically in italics. In your environment, the HTML text gets rendered as> This is *emphasized*.There are HTML tags for pretty much everything – text markup (bold text, strikethrough text), text structure (titles, lists), references (links) to other documents, and many more. These HTML tags shape the Web as we know it. However, within all the HTML markup, it may become difficult to actually access the _text_ that lies within. We'd like to implement a simple function that removes _HTML markup_ and converts it into text. If our input is```htmlHere's some <strong>strong argument</strong>.```the output should be> Here's some strong argument. Here's a Python function which does exactly this. It takes a (HTML) string and returns the text without markup.
###Code
def remove_html_markup(s):
tag = False
out = ""
for c in s:
if c == '<': # start of markup
tag = True
elif c == '>': # end of markup
tag = False
elif not tag:
out = out + c
return out
###Output
_____no_output_____
###Markdown
This function works, but not always. Before we start debugging things, let us first explore its code and how it works. Understanding Python ProgramsIf you're new to Python, you might first have to understand what the above code does. We very much recommend the [Python tutorial](https://docs.python.org/3/tutorial/) to get an idea on how Python works. The most important things for you to understand the above code are these three:1. Python structures programs through _indentation_, so the function and `for` bodies are defined by being indented;2. Python is _dynamically typed_, meaning that the type of variables like `c`, `tag`, or `out` is determined at run-time.3. Most of Python's syntactic features are inspired by other common languages, such as control structures (`while`, `if`, `for`), assignments (`=`), or comparisons (`==`, `!=`, `<`).With that, you can already understand what the above code does: `remove_html_markup()` takes a (HTML) string `s` and then iterates over the individual characters (`for c in s`). By default, these characters are added to the return string `out`. However, if `remove_html_markup()` finds a `<` character, it sets the `tag` flag, and all further characters are skipped until a `>` character is found.In contrast to other languages, Python makes no difference between strings and characters – there are only strings. As in HTML, strings can be enclosed in single quotes (`'a'`) and in double quotes (`"a"`). This is useful if you want to specify a string that contains quotes, as in `'She said "hello", and then left'` or `"The first character is a 'c'"`. Running a FunctionTo find out whether `remove_html_markup()` works correctly, we can *test* it with a few values. For the string```htmlHere's some <strong>strong argument</strong>.```for instance, it produces the correct value:
###Code
remove_html_markup("Here's some <strong>strong argument</strong>.")
###Output
_____no_output_____
###Markdown
Interacting with NotebooksIf you are reading this in the interactive notebook, you can try out `remove_html_markup()` with other values as well. Click on the above cells with the invocation of `remove_html_markup()` and change the value – say, to `remove_html_markup("foo")`. Press Shift+Enter (or click on the play symbol) to execute it and see the result. If you get an error message, go to the above cell with the definition of `remove_html_markup()` and execute this first. You can also run _all_ cells at once; see the Notebook menu for details. (You can actually also change the text by clicking on it, and corect mistaks such as in this sentence.) Executing a single cell does not execute other cells, so if your cell builds on a definition in another cell that you have not executed yet, you will get an error. You can select `Run all cells above` from the menu to ensure all definitions are set. Also keep in mind that, unless overwritten, all definitions are kept across executions. Occasionally, it thus helps to _restart the kernel_ (i.e. start the Python interpreter from scratch) to get rid of older, superfluous definitions. Testing a Function Since one can change not only invocations, but also definitions, we want to ensure that our function works properly now and in the future. To this end, we introduce tests through _assertions_ – a statement that fails if the given _check_ is false. The following assertion, for instance, checks that the above call to `remove_html_markup()` returns the correct value:
###Code
assert remove_html_markup("Here's some <strong>strong argument</strong>.") == \
"Here's some strong argument."
###Output
_____no_output_____
###Markdown
If you change the code of `remove_html_markup()` such that the above assertion fails, you will have introduced a bug. Oops! A Bug! As nice and simple as `remove_html_markup()` is, it is buggy. Some HTML markup is not properly stripped away. Consider this HTML tag, which would render as an input field in a form:```html<input type="text" value="<your name>">```If we feed this string into `remove_html_markup()`, we would expect an empty string as the result. Instead, this is what we get:
###Code
remove_html_markup('<input type="text" value="<your name>">')
###Output
_____no_output_____
###Markdown
Every time we encounter a bug, this means that our earlier tests have failed. We thus need to introduce another test that documents not only how the bug came to be, but also the result we actually expected. The assertion we write now fails with an error message. (The `ExpectError` magic ensures we see the error message, but the rest of the notebook is still executed.)
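To give an idea of what this helper does, here is a rough sketch of a comparable context manager – an illustration only, not the actual implementation from the `ExpectError` module, which is more elaborate:

```python
# Rough sketch of an ExpectError-like helper (illustrative; not the real class)
import traceback

class ExpectErrorSketch:
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, tb):
        if exc_type is not None:
            traceback.print_exception(exc_type, exc_value, tb)  # show the error...
        return True  # ...but suppress it, so the rest of the notebook keeps running

with ExpectErrorSketch():
    assert 2 + 2 == 5  # the AssertionError is reported, not raised
```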
###Code
from ExpectError import ExpectError
with ExpectError():
assert remove_html_markup('<input type="text" value="<your name>">') == ""
###Output
Traceback (most recent call last):
File "<ipython-input-7-c7b482ebf524>", line 2, in <module>
assert remove_html_markup('<input type="text" value="<your name>">') == ""
AssertionError (expected)
###Markdown
With this, we now have our task: _Fix the failure as above._ Visualizing CodeTo properly understand what is going on here, it helps drawing a diagram on how `remove_html_markup()` works. Technically, `remove_html_markup()` implements a _state machine_ with two states `tag` and `¬ tag`. We change between these states depending on the characters we process. This is visualized in the following diagram:
###Code
from graphviz import Digraph, nohtml
from IPython.display import display
# ignore
PASS = "✔"
FAIL = "✘"
PASS_COLOR = 'darkgreen' # '#006400' # darkgreen
FAIL_COLOR = 'red4' # '#8B0000' # darkred
STEP_COLOR = 'peachpuff'
FONT_NAME = 'Raleway'
# ignore
def graph(comment="default"):
return Digraph(name='', comment=comment, graph_attr={'rankdir': 'LR'},
node_attr={'style': 'filled',
'fillcolor': STEP_COLOR,
'fontname': FONT_NAME},
edge_attr={'fontname': FONT_NAME})
# ignore
state_machine = graph()
state_machine.node('Start', )
state_machine.edge('Start', '¬ tag')
state_machine.edge('¬ tag', '¬ tag', label=" ¬ '<'\nadd character")
state_machine.edge('¬ tag', 'tag', label="'<'")
state_machine.edge('tag', '¬ tag', label="'>'")
state_machine.edge('tag', 'tag', label="¬ '>'")
# ignore
display(state_machine)
###Output
_____no_output_____
###Markdown
You see that we start in the non-tag state (`¬ tag`). Here, for every character that is not `'<'`, we add the character to the output and stay in the `¬ tag` state. If we read a `'<'`, however, we switch to the `tag` state, in which we skip all further characters until a `'>'` character is found. A First FixLet us now look at the above state machine, and process through our input:```html<input type="text" value="<your name>">``` So what you can see is: We are interpreting the `'>'` of `"<your name>"` as the closing of the tag. However, this is a quoted string, so the `'>'` should be interpreted as a regular character, not as markup. This is an example of _missing functionality:_ We do not handle quoted characters correctly. We haven't claimed yet to take care of all functionality, so we still need to extend our code. So we extend the whole thing. We set up a special "quote" state which processes quoted inputs in tags until the end of the quoted string is reached. This is what the state machine looks like:
###Code
# ignore
state_machine = graph()
state_machine.node('Start')
state_machine.edge('Start', '¬ quote\n¬ tag')
state_machine.edge('¬ quote\n¬ tag', '¬ quote\n¬ tag',
label="¬ '<'\nadd character")
state_machine.edge('¬ quote\n¬ tag', '¬ quote\ntag', label="'<'")
state_machine.edge('¬ quote\ntag', 'quote\ntag', label="'\"'")
state_machine.edge('¬ quote\ntag', '¬ quote\ntag', label="¬ '\"' ∧ ¬ '>'")
state_machine.edge('quote\ntag', 'quote\ntag', label="¬ '\"'")
state_machine.edge('quote\ntag', '¬ quote\ntag', label="'\"'")
state_machine.edge('¬ quote\ntag', '¬ quote\n¬ tag', label="'>'")
# ignore
display(state_machine)
###Output
_____no_output_____
###Markdown
This is a bit more complex already. Proceeding from left to right, we first have the state `¬ quote ∧ ¬ tag`, which is our "standard" state for text. If we encounter a `'<'`, we again switch to the "tagged" state `¬ quote ∧ tag`. In this state, however (and only in this state), if we encounter a quotation mark, we switch to the "quotation" state `quote ∧ tag`, in which we remain until we see another quotation mark indicating the end of the string – and then continue in the "tagged" state `¬ quote ∧ tag` until we see the end of the tag. Things get even more complicated as HTML allows both single and double quotation characters. Here's a revised implementation of `remove_html_markup()` that takes the above states into account:
###Code
def remove_html_markup(s):
tag = False
quote = False
out = ""
for c in s:
if c == '<' and not quote:
tag = True
elif c == '>' and not quote:
tag = False
elif c == '"' or c == "'" and tag:
quote = not quote
elif not tag:
out = out + c
return out
###Output
_____no_output_____
###Markdown
Now, our previous input works well:
###Code
remove_html_markup('<input type="text" value="<your name>">')
###Output
_____no_output_____
###Markdown
and our earlier tests also pass:
###Code
assert remove_html_markup("Here's some <strong>strong argument</strong>.") == \
"Here's some strong argument."
assert remove_html_markup('<input type="text" value="<your name>">') == ""
###Output
_____no_output_____
###Markdown
However, the above code still has a bug. In two of these inputs, HTML markup is still not properly stripped:```html
<b>foo</b>
<b>"foo"</b>
"<b>foo</b>"
<"b">foo</"b">
```Can you guess which ones these are? Again, a simple assertion will reveal the culprits:
###Code
with ExpectError():
assert remove_html_markup('<b>foo</b>') == 'foo'
with ExpectError():
assert remove_html_markup('<b>"foo"</b>') == '"foo"'
with ExpectError():
assert remove_html_markup('"<b>foo</b>"') == '"foo"'
with ExpectError():
assert remove_html_markup('<"b">foo</"b">') == 'foo'
###Output
_____no_output_____
###Markdown
So, unfortunately, we're not done yet – our function still has errors. The Devil's Guide to DebuggingLet us now discuss a couple of methods that do _not_ work well for debugging. (These "devil's suggestions" are adapted from the 1993 book "Code Complete" by Steve McConnell.) Printf DebuggingWhen I was a student, I never got any formal training in debugging, so I had to figure this out for myself. What I learned was how to use _debugging output_; in Python, this would be the `print()` function. For instance, I would go and scatter `print()` calls everywhere:
###Code
def remove_html_markup_with_print(s):
tag = False
quote = False
out = ""
for c in s:
print("c =", repr(c), "tag =", tag, "quote =", quote)
if c == '<' and not quote:
tag = True
elif c == '>' and not quote:
tag = False
elif c == '"' or c == "'" and tag:
quote = not quote
elif not tag:
out = out + c
return out
###Output
_____no_output_____
###Markdown
This way of inspecting executions is commonly called "Printf debugging", after the C `printf()` function. Then, running this would allow me to see what's going on in my code:
###Code
remove_html_markup_with_print('<b>"foo"</b>')
###Output
c = '<' tag = False quote = False
c = 'b' tag = True quote = False
c = '>' tag = True quote = False
c = '"' tag = False quote = False
c = 'f' tag = False quote = True
c = 'o' tag = False quote = True
c = 'o' tag = False quote = True
c = '"' tag = False quote = True
c = '<' tag = False quote = False
c = '/' tag = True quote = False
c = 'b' tag = True quote = False
c = '>' tag = True quote = False
###Markdown
Yes, one sees what is going on – but this is horribly inefficient! Think of a 1,000-character input – you'd have to go through 2,000 lines of logs. It may help you, but it's a total time waster. Plus, you have to enter these statements, remove them again... it's a maintenance nightmare. (You may even forget printf's in your code, creating a security problem: Mac OS X versions 10.7 to 10.7.3 would log the password in clear because someone had forgotten to turn off debugging output.) Debugging into Existence I would also try to _debug the program into existence._ Just change things until they work. Let me see: If I remove the conditions "and not quote" from the program, it would actually work again:
###Code
def remove_html_markup_without_quotes(s):
tag = False
quote = False
out = ""
for c in s:
if c == '<': # and not quote:
tag = True
elif c == '>': # and not quote:
tag = False
elif c == '"' or c == "'" and tag:
quote = not quote
elif not tag:
out = out + c
return out
assert remove_html_markup_without_quotes('<"b">foo</"b">') == 'foo'
###Output
_____no_output_____
###Markdown
Cool! Unfortunately, the function still fails on the other input:
###Code
with ExpectError():
assert remove_html_markup_without_quotes('<b>"foo"</b>') == '"foo"'
###Output
Traceback (most recent call last):
File "<ipython-input-28-1d8954a52bcf>", line 2, in <module>
assert remove_html_markup_without_quotes('<b>"foo"</b>') == '"foo"'
AssertionError (expected)
###Markdown
So, maybe we can change things again, such that both work? And maybe the other tests we had earlier won't fail? Let's just continue to change things randomly again and again and again. Oh, and of course, I would never back up earlier versions such that I would be able to keep track of what has changed and when. Use the Most Obvious Fix My favorite: Use the most obvious fix. This means that you're fixing the symptom, not the problem. In our case, this would be something like:
###Code
def remove_html_markup_fixed(s):
if s == '<b>"foo"</b>':
return '"foo"'
...
###Output
_____no_output_____
###Markdown
Miracle! Our earlier failing assertion now works! Now we can do the same for the other failing test, too, and we're done.(Rumor has it that some programmers use this technique to get their tests to pass...) Things to do InsteadAs with any devil's guide, you get an idea of how to do things by doing the _opposite._ What this means is:1. Understand the code2. Fix the problem, not the symptom3. Proceed systematicallywhich is what we will apply for the rest of this chapter. From Defect to FailureTo understand how to systematically debug a program, we first have to understand how failures come to be. The typical debugging situation looks like this. We have a program (execution), taking some input and producing some output. The output is in *error* (✘), meaning an unwanted and unintended deviation from what is correct, right, or true.The input, in contrast, is assumed to be correct (✔). (Otherwise, we wouldn't search for the bug in our program, but in whatever produced its input.)
###Code
# ignore
def execution_diagram(show_steps=True, variables=[],
steps=3, error_step=666,
until=666, fault_path=[]):
dot = graph()
dot.node('input', shape='none', fillcolor='white', label=f"Input {PASS}",
fontcolor=PASS_COLOR)
last_outgoing_states = ['input']
for step in range(1, min(steps + 1, until)):
if step == error_step:
step_label = f'Step {step} {FAIL}'
step_color = FAIL_COLOR
else:
step_label = f'Step {step}'
step_color = None
if step >= error_step:
state_label = f'State {step} {FAIL}'
state_color = FAIL_COLOR
else:
state_label = f'State {step} {PASS}'
state_color = PASS_COLOR
state_name = f's{step}'
outgoing_states = []
incoming_states = []
if not variables:
dot.node(name=state_name, shape='box',
label=state_label, color=state_color,
fontcolor=state_color)
else:
var_labels = []
for v in variables:
vpath = f's{step}:{v}'
if vpath in fault_path:
var_label = f'<{v}>{v} ✘'
outgoing_states.append(vpath)
incoming_states.append(vpath)
else:
var_label = f'<{v}>{v}'
var_labels.append(var_label)
record_string = " | ".join(var_labels)
dot.node(name=state_name, shape='record',
label=nohtml(record_string), color=state_color,
fontcolor=state_color)
if not outgoing_states:
outgoing_states = [state_name]
if not incoming_states:
incoming_states = [state_name]
for outgoing_state in last_outgoing_states:
for incoming_state in incoming_states:
if show_steps:
dot.edge(outgoing_state, incoming_state,
label=step_label, fontcolor=step_color)
else:
dot.edge(outgoing_state, incoming_state)
last_outgoing_states = outgoing_states
if until > steps + 1:
# Show output
if error_step > steps:
dot.node('output', shape='none', fillcolor='white',
label=f"Output {PASS}", fontcolor=PASS_COLOR)
else:
dot.node('output', shape='none', fillcolor='white',
label=f"Output {FAIL}", fontcolor=FAIL_COLOR)
for outgoing_state in last_outgoing_states:
label = "Execution" if steps == 0 else None
dot.edge(outgoing_state, 'output', label=label)
display(dot)
# ignore
execution_diagram(show_steps=False, steps=0, error_step=0)
###Output
_____no_output_____
###Markdown
This situation we see above is what we call a *failure*: An externally visible _error_ in the program behavior, with the error again being an unwanted and unintended deviation from what is correct, right, or true. How does this failure come to be? The execution we see above breaks down into several program _states_, one after the other.
###Code
# ignore
for until in range(1, 6):
execution_diagram(show_steps=False, until=until, error_step=2)
###Output
_____no_output_____
###Markdown
Initially, the program state is still correct (✔). However, at some point in the execution, the state gets an _error_, also known as a *fault*. This fault – again an unwanted and unintended deviation from what is correct, right, or true – then propagates along the execution, until it becomes externally visible as a _failure_.(In reality, there are many, many more states than just this, but these would not fit in a diagram.) How does a fault come to be? Each of these program states is produced by a _step_ in the program code. These steps take a state as input and produce another state as output. Technically speaking, the program inputs and outputs are also parts of the program state, so the input flows into the first step, and the output is the state produced by the last step.
###Code
# ignore
for until in range(1, 6):
execution_diagram(show_steps=True, until=until, error_step=2)
###Output
_____no_output_____
###Markdown
Now, in the diagram above, Step 2 gets a _correct_ state as input and produces a _faulty_ state as output. The produced fault then propagates across more steps to finally become visible as a _failure_. The goal of debugging thus is to _search_ for the step in which the state first becomes faulty. The _code_ associated with this step is again an error – an unwanted and unintended deviation from what is correct, right, or true – and is called a _defect_. This is what we have to find – and to fix. Sounds easy, right? Unfortunately, things are not that easy, and that has something to do with the program state. Let us assume our state consists of three variables, `v1` to `v3`, and that Step 2 produces a fault in `v2`. This fault then propagates to the output:
###Code
# ignore
for until in range(1, 6):
execution_diagram(show_steps=True, variables=['v1', 'v2', 'v3'],
error_step=2,
until=until, fault_path=['s2:v2', 's3:v2'])
###Output
_____no_output_____
###Markdown
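To make this propagation concrete, here is a made-up miniature example (illustrative only, not the chapter's program): a single wrong operator in the code corrupts a variable during execution, and the corrupted value travels all the way to the returned result.

```python
# Made-up miniature (not the chapter's example): a defect causes a fault
# that propagates into a visible failure.
def sum_of_squares(numbers):             # intended behavior
    total = 0
    for x in numbers:
        total = total + x * x
    return total

def buggy_sum_of_squares(numbers):
    total = 0
    for x in numbers:
        total = total + x + x            # defect: `+` instead of `*` (the mistake)
    return total                         # the faulty `total` propagates to the output

assert sum_of_squares([1, 2, 3]) == 14
assert buggy_sum_of_squares([1, 2, 3]) == 12   # visible failure: 12 instead of 14
```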
The way these faults propagate is called a *cause-effect chain*:* The _defect_ in the code _causes_ a fault in the state when executed.* This _fault_ in the state then _propagates_ through further execution steps...* ... until it becomes visible as a _failure_. Since the code was originally written by a human, any defect can be related to some original _mistake_ the programmer made. This gives us a number of terms that all are more precise than the general "error" or the colloquial "bug":* A _mistake_ is a human act or decision resulting in an error.* A _defect_ is an error in the program code. Also called *bug*.* A _fault_ is an error in the program state. Also called *infection*.* A _failure_ is an externally visible error in the program behavior. Also called *malfunction*.The cause-effect chain of events is thus* Mistake → Defect → Fault → ... → Fault → FailureNote that not every defect also causes a failure, which is why, despite all testing, there can still be defects in the code looming around until the right conditions are met to trigger them. On the other hand, though, _every failure can be traced back to the defect that causes it_. Our job is to break the cause-effect chain. From Failure to DefectTo find a defect from a failure, we _trace back_ the faults along their _propagation_ – that is, we find out which faults in the earlier state have caused the later faults. We start from the very end of the execution and then gradually progress backwards in time, examining fault after fault until we find a _transition_ from a correct state to a faulty state – that is, a step in which a correct state comes in and a faulty state comes out. At this point, we have found the origin of the failure – and the defect that causes it. What sounds like a straightforward strategy, unfortunately, doesn't always work this way in practice. That is because of the following problems of debugging:* First, program states are actually _large_, encompassing dozens to thousands of variables, possibly even more. If you have to search all of these manually and check them for faults, you will spend a lot of time on a single state.* Second, you do not always know _whether a state is correct or not._ While most programs have some form of specification for their inputs and outputs, these do not necessarily exist for intermediate results. If one had a specification that could check each state for correctness (possibly even automatically), debugging would be trivial. Unfortunately, it is not, and that's partly due to the lack of specifications.* Third, executions typically do not come in a handful of steps, as in the diagrams above; instead, they can easily encompass _thousands to millions of steps._ This means that you will have to examine not just one state, but several, making the problem much worse.To make your search efficient, you thus have to _focus_ your search – starting with the most likely causes and gradually progressing to less probable ones. This is what we call a _debugging strategy_. The Scientific MethodNow that we know how failures come to be, let's look into how to systematically find their causes. What we need is a _strategy_ that helps us search for how and when the failure comes to be. For this, we use a process called the *scientific method*. When we are debugging a program, we are trying to find the causes of a given effect – very much like natural scientists try to understand why things in nature are as they are and how they come to be.
Over thousands of years, scientists have conducted _observations_ and _experiments_ to come to an understanding of how our world works. The process by which experimental scientists operate has been coined "The scientific method". This is how it works: 1. Formulate a _question_, as in "Why does this apple fall down?".2. Invent a _hypothesis_ based on knowledge obtained while formulating the question, that may explain the observed behavior. 3. Determining the logical consequences of the hypothesis, formulate a _prediction_ that can _support_ or _refute_ the hypothesis. Ideally, the prediction would distinguish the hypothesis from likely alternatives.4. _Test_ the prediction (and thus the hypothesis) in an _experiment_. If the prediction holds, confidence in the hypothesis increases; otherwise, it decreases.5. Repeat Steps 2–4 until there are no discrepancies between hypothesis and predictions and/or observations. At this point, your hypothesis may be named a *theory* – that is, a predictive and comprehensive description of some aspect of the natural world. The gravitational theory, for instance, predicts very well how the moon revolves around the earth, and how the earth revolves around the sun. Our debugging problems are of a slightly lesser scale – we'd like a theory of how our failure came to be – but the process is pretty much the same.
###Code
# ignore
dot = graph()
dot.node('Hypothesis')
dot.node('Observation')
dot.node('Prediction')
dot.node('Experiment')
dot.edge('Hypothesis', 'Observation',
label="<Hypothesis<BR/>is <I>supported:</I><BR/>Refine it>",
dir='back')
dot.edge('Hypothesis', 'Prediction')
dot.node('Problem Report', shape='none', fillcolor='white')
dot.edge('Problem Report', 'Hypothesis')
dot.node('Code', shape='none', fillcolor='white')
dot.edge('Code', 'Hypothesis')
dot.node('Runs', shape='none', fillcolor='white')
dot.edge('Runs', 'Hypothesis')
dot.node('More Runs', shape='none', fillcolor='white')
dot.edge('More Runs', 'Hypothesis')
dot.edge('Prediction', 'Experiment')
dot.edge('Experiment', 'Observation')
dot.edge('Observation', 'Hypothesis',
label="<Hypothesis<BR/>is <I>rejected:</I><BR/>Seek alternative>")
# ignore
display(dot)
###Output
_____no_output_____
###Markdown
In debugging, we proceed the very same way – indeed, we are treating bugs as if they were natural phenomena. This analogy may sound far-fetched, as programs are anything but natural. Nature, by definition, is not under our control. But bugs are _out of our control just as well._ Hence, the analogy is not that far-fetched – and we can apply the same techniques for debugging. Finding a Hypothesis Let us apply the scientific method to our Python program which removes HTML tags. First of all, let us recall the problem – `remove_html_markup()` works for some inputs, but fails on others.
###Code
for i, html in enumerate(['<b>foo</b>',
'<b>"foo"</b>',
'"<b>foo</b>"',
'<"b">foo</"b">']):
result = remove_html_markup(html)
print("%-2d %-15s %s" % (i + 1, html, result))
###Output
1 <b>foo</b> foo
2 <b>"foo"</b> foo
3 "<b>foo</b>" <b>foo</b>
4 <"b">foo</"b"> foo
###Markdown
Input 1 and 4 work as expected, the others do not. We can write these down in a table, such that we can always look back at our previous results:|Input|Expectation|Output|Outcome||-----|-----------|------|-------||`<b>foo</b>`|`foo`|`foo`|✔||`<b>"foo"</b>`|`"foo"`|`foo`|✘||`"<b>foo</b>"`|`"foo"`|`<b>foo</b>`|✘||`<"b">foo</"b">`|`foo`|`foo`|✔|
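Such a log can also be kept programmatically; here is one possible sketch (an illustration, not part of the original text), using the `remove_html_markup()` version defined above:

```python
# Sketch of keeping the observation log in code (helper name is made up)
observations = []

def log_observation(inp, expectation):
    output = remove_html_markup(inp)
    outcome = "✔" if output == expectation else "✘"
    observations.append((inp, expectation, output, outcome))
    return outcome

log_observation('<b>foo</b>', 'foo')       # '✔'
log_observation('<b>"foo"</b>', '"foo"')   # '✘' -- the quotes are stripped
```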
###Code
quiz("From the difference between success and failure,"
" we can already devise some observations about "
" what's wrong with the output."
" Which of these can we turn into general hypotheses?",
["Double quotes are stripped from the tagged input.",
"Tags in double quotes are not stripped.",
"The tag '<b>' is always stripped from the input.",
"Four-letter words are stripped."], [298 % 33, 1234 % 616])
###Output
_____no_output_____
###Markdown
Testing a HypothesisThe hypotheses that remain are:1. Double quotes are stripped from the tagged input.2. Tags in double quotes are not stripped. These may be two separate issues, but chances are they are tied to each other. Let's focus on 1., because it is simpler. Does it hold for all inputs, even untagged ones? Our hypothesis becomes1. Double quotes are stripped from the ~~tagged~~ input. Let's devise an experiment to validate this. If we feed the string```html"foo"```(including the double quotes) into `remove_html_markup()`, we should obtain```html"foo"```as result – that is, the output should be the unchanged input. However, if our hypothesis 1. is correct, we should obtain```htmlfoo```as result – that is, "Double quotes are stripped from the input" as predicted by the hypothesis. We can very easily test this hypothesis:
###Code
remove_html_markup('"foo"')
###Output
_____no_output_____
###Markdown
Our hypothesis is confirmed! We can add this to our list of observations. |Input|Expectation|Output|Outcome||-----|-----------|------|-------||`<b>foo</b>`|`foo`|`foo`|✔||`<b>"foo"</b>`|`"foo"`|`foo`|✘||`"<b>foo</b>"`|`"foo"`|`<b>foo</b>`|✘||`<"b">foo</"b">`|`foo`|`foo`|✔||`"foo"`|`"foo"`|`foo`|✘| You can try out the hypothesis with more inputs – and it remains valid. Any non-markup input that contains double quotes will have these stripped. Where does that quote-stripping come from? This is where we need to explore the cause-effect chain. The only place in `remove_html_markup()` where quotes are handled is this line:```pythonelif c == '"' or c == "'" and tag: quote = not quote```So, quotes should be removed only if `tag` is set. However, `tag` can be set only if the input contains a markup tag, which is not the case for a simple input like `"foo"`. Hence, what we observe is actually _impossible._ Yet, it happens. Refining a HypothesisDebugging is a game of falsifying assumptions. You assume the code works – it doesn't. You assume the `tag` flag cannot be set – yet it may be. What do we do? Again, we create a hypothesis:1. The error is due to `tag` being set. How do we know whether `tag` is being set? Let me introduce one of the most powerful debugging tools ever invented, the `assert` statement. The statement```pythonassert cond```evaluates the given condition `cond` and* if it holds: proceed as usual* if `cond` does not hold: throw an exceptionAn `assert` statement _encodes our assumptions_ and as such, should never fail. If it does, well, then something is wrong. Using `assert`, we can check the value of `tag` all through the loop:
###Code
def remove_html_markup_with_tag_assert(s):
tag = False
quote = False
out = ""
for c in s:
assert not tag # <=== Just added
if c == '<' and not quote:
tag = True
elif c == '>' and not quote:
tag = False
elif c == '"' or c == "'" and tag:
quote = not quote
elif not tag:
out = out + c
return out
###Output
_____no_output_____
###Markdown
Our expectation is that this assertion would fail. So, do we actually get an exception? Try it out for yourself by uncommenting the following line:
###Code
# remove_html_markup_with_tag_assert('"foo"')
quiz("What happens after inserting the above assertion?",
["The program raises an exception. (i.e., tag is set)",
"The output is as before, i.e., foo without quotes."
" (which means that tag is not set)"],
2)
###Output
_____no_output_____
###Markdown
Here's the solution:
###Code
with ExpectError():
result = remove_html_markup_with_tag_assert('"foo"')
result
###Output
_____no_output_____
###Markdown
Refuting a HypothesisWe did not get an exception, hence we reject our hypothesis:1. ~~The error is due to `tag` being set.~~ Again, let's go back to the only place in our code where quotes are handled:```pythonelif c == '"' or c == "'" and tag: quote = not quote```Because of the assertion, we already know that `tag` is always False. Hence, this condition should never hold either. But maybe there's something wrong with the condition such that it holds? Here's our hypothesis:1. The error is due to the quote condition evaluating to true If the condition evaluates to true, then `quote` should be set. We could now go and assert that `quote` is false; but we only care about the condition. So we insert an assertion that assumes that the code setting the `quote` flag is never reached:
###Code
def remove_html_markup_with_quote_assert(s):
tag = False
quote = False
out = ""
for c in s:
if c == '<' and not quote:
tag = True
elif c == '>' and not quote:
tag = False
elif c == '"' or c == "'" and tag:
assert False # <=== Just added
quote = not quote
elif not tag:
out = out + c
return out
###Output
_____no_output_____
###Markdown
Our expectation this time again is that the assertion fails. So, do we get an exception this time? Try it out for yourself by uncommenting the following line:
###Code
# remove_html_markup_with_quote_assert('"foo"')
quiz("What happens after inserting the 'assert' tag?",
["The program raises an exception (i.e., the quote condition holds)",
"The output is still foo (i.e., the quote condition does not hold)"], 29 % 7)
###Output
_____no_output_____
###Markdown
Here's what happens now that we have the `assert` tag:
###Code
with ExpectError():
result = remove_html_markup_with_quote_assert('"foo"')
###Output
Traceback (most recent call last):
File "<ipython-input-47-9ce255289291>", line 2, in <module>
result = remove_html_markup_with_quote_assert('"foo"')
File "<ipython-input-44-9c8a53a91780>", line 12, in remove_html_markup_with_quote_assert
assert False # <=== Just added
AssertionError (expected)
###Markdown
From this observation, we can deduce that our hypothesis is _confirmed_:1. The error is due to the quote condition evaluating to true (CONFIRMED)and the _condition is actually faulty._ It evaluates to True although `tag` is always False:```pythonelif c == '"' or c == "'" and tag: quote = not quote```But this condition holds for single and double quotes. Is there a difference? Let us see whether our observations generalize towards general quotes:1. ~~Double~~ quotes are stripped from the input. We can verify these hypotheses with an additional experiment. We go back to our original implementation (without any asserts), and then check it:
###Code
remove_html_markup("'foo'")
###Output
_____no_output_____
###Markdown
Surprise: Our hypothesis is rejected and we can add another observation to our table:|Input|Expectation|Output|Outcome||-----|-----------|------|-------||`'foo'`|`'foo'`|`'foo'`|✔|So, the condition* becomes True when a double quote is seen* becomes False (as it should) with single quotes At this point, you should have enough material to solve the problem. How do we have to fix the condition? Here are four alternatives:```pythonc == "" or c == '' and tag Choice 1c == '"' or c == "'" and not tag Choice 2(c == '"' or c == "'") and tag Choice 3... Something else```
###Code
quiz("How should the condition read?",
["Choice 1", "Choice 2", "Choice 3", "Something else"],
399 % 4)
###Output
_____no_output_____
###Markdown
Fixing the Bug So, you have spotted the defect: In Python (and most other languages), `and` takes precedence over `or`, which is why the condition is wrong. It should read:```python(c == '"' or c == "'") and tag```(Actually, good programmers rarely depend on precedence; it is considered good style to use parentheses lavishly.) So, our hypothesis now has become1. The error is due to the `quote` condition evaluating to True Is this our final hypothesis? We can check whether our earlier examples should now work well:|Input|Expectation|Output|Outcome||-----|-----------|------|-------||`<b>foo</b>`|`foo`|`foo`|✔||`<b>"foo"</b>`|`"foo"`|`foo`|✘||`"<b>foo</b>"`|`"foo"`|`<b>foo</b>`|✘||`<"b">foo</"b">`|`foo`|`foo`|✔||`"foo"`|`"foo"`|`foo`|✘||`'foo'`|`'foo'`|`'foo'`|✔|In all of these examples, the `quote` flag should no longer be set outside of tags; hence, everything should work as expected. In terms of the scientific process, we now have a *theory* – a hypothesis that* is consistent with all earlier observations* predicts future observations (in our case: correct behavior)For debugging, our problems are usually too small for a big word like theory, so we use the word *diagnosis* instead. So is our diagnosis sufficient to fix the bug? Let us check. Checking DiagnosesIn debugging, you should start to fix your code if and only if you have a diagnosis that shows two things:1. **Causality.** Your diagnosis should explain why and how the failure came to be. Hence, it induces a _fix_ that, when applied, should make the failure disappear.2. **Incorrectness.** Your diagnosis should explain why and how the code is _incorrect_ (which in turn suggests how to _correct_ the code). Hence, the fix it induces not only applies to the given failure, but also to all related failures. Showing both these aspects – _causality_ and _incorrectness_ – is crucial for a debugging diagnosis:* If you find that you can change some location to make the failure go away, but are not sure why this location is wrong, then your "fix" may apply only to the symptom rather than the source. Your diagnosis explains _causality_, but not _incorrectness_.* If you find that there is a defect in some code location, but do not verify whether this defect is related to the failure in question, then your "fix" may not address the failure. Your diagnosis addresses _incorrectness_, but not _causality_. When you do have a diagnosis that explains both causality (how the failure came to be), and incorrectness (how to correct the code accordingly), then (and only then!) is it time to actually _fix_ the code accordingly. After applying the fix, the failure should be gone, and no other failure should occur. If the failure persists, this should come as a surprise. Obviously, there is some other aspect that you haven't considered yet, so you have to go back to the drawing board and add another failing test case to the set of observations. Fixing the Code All these things considered, let us go and fix `remove_html_markup()`. We know how the defect _causes_ the failure (by erroneously setting `quote` outside of tags). We know that the line in question is _incorrect_ (as single and double quotes should be treated similarly). So, our diagnosis shows both causality and incorrectness, and we can go and fix the code accordingly:
###Code
def remove_html_markup(s):
tag = False
quote = False
out = ""
for c in s:
if c == '<' and not quote:
tag = True
elif c == '>' and not quote:
tag = False
elif (c == '"' or c == "'") and tag: # <-- FIX
quote = not quote
elif not tag:
out = out + c
return out
###Output
_____no_output_____
###Markdown
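Why did the fix work? The root cause was operator precedence; the following quick check (illustrative, not from the original text) shows how Python groups the faulty and the corrected conditions:

```python
# Illustrative check: how Python groups the two conditions.
c, tag = '"', False   # a double quote seen outside of a tag

# Buggy: `and` binds tighter than `or`, i.e. c == '"' or (c == "'" and tag)
buggy = c == '"' or c == "'" and tag
assert buggy == (c == '"' or (c == "'" and tag))
assert buggy          # True -- quote handling triggers even outside a tag

# Fixed: parentheses force the intended grouping
fixed = (c == '"' or c == "'") and tag
assert not fixed      # False -- quote handling now requires being inside a tag
```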
We verify that the fix was successful by running our earlier tests. Not only should the previously failing tests now pass, the previously passing tests also should not be affected. Fortunately, all tests now pass:
###Code
assert remove_html_markup("Here's some <strong>strong argument</strong>.") == \
"Here's some strong argument."
assert remove_html_markup(
'<input type="text" value="<your name>">') == ""
assert remove_html_markup('<b>foo</b>') == 'foo'
assert remove_html_markup('<b>"foo"</b>') == '"foo"'
assert remove_html_markup('"<b>foo</b>"') == '"foo"'
assert remove_html_markup('<"b">foo</"b">') == 'foo'
###Output
_____no_output_____
###Markdown
So, our hypothesis _was_ a theory, and our diagnosis was correct. Success! Alternate PathsA defect may give rise to more than one hypothesis, and each diagnosis can be reached in many ways. We could also have started with our other hypothesis2. Tags in double quotes are not strippedand by reasoning and experiments, we would have reached the same conclusion that the condition is faulty:* To strip tags, the `tag` flag must be set (but it is not).* To set the `tag` flag, the `quote` variable must not be set (but it is).* The `quote` flag is set under the given condition (which thus must be faulty).This gets us to the same diagnosis as above – and, of course, the same fix. Homework after the Fix After having successfully validated the fix, we still have some homework to do. Check for further Defect Occurrences First, we may want to check that the underlying mistake was not made elsewhere, too.For an error like the one in `remove_html_markup()`, it may be wise to check other parts of the code (possibly written by the same programmer) to see whether Boolean formulas show proper precedence. Consider setting up a static program checker or style checker to catch similar mistakes. Check your TestsIf the defect was not found through testing, now is a good time to make sure it will be found the next time. If you use automated tests, add a test that catches the bug (as well as similar ones), such that you can prevent regressions. Add Assertions To be 100% sure, we could add an assertion to `remove_html_markup()` that checks the final result for correctness. Unfortunately, writing such an assertion is just as complex as writing the function itself.There is one assertion, though, which could be placed in the loop body to catch this kind of error, and which could remain in the code. Which is it?
###Code
quiz("Which assertion would have caught the problem?",
["assert quote and not tag",
"assert quote or not tag",
"assert tag or not quote",
"assert tag and not quote"],
3270 - 3267)
###Output
_____no_output_____
###Markdown
Indeed, the statement```pythonassert tag or not quote```is correct. This excludes the situation of ¬`tag` ∧ `quote` – that is, the `tag` flag is not set, but the `quote` flag is. If you remember our state machine from above, this is actually a state that should never exist:
###Code
# ignore
display(state_machine)
###Output
_____no_output_____
###Markdown
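To see why this is the right invariant, a quick enumeration (illustrative only) shows that `tag or not quote` rules out exactly the impossible combination ¬`tag` ∧ `quote`:

```python
# Illustrative enumeration: the invariant is False only for tag=False, quote=True.
for tag in [False, True]:
    for quote in [False, True]:
        print(f"tag={tag!s:<5} quote={quote!s:<5} tag or not quote = {tag or not quote}")
```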
Here's our function in its "final" state. As software goes, software is never final – and this may also hold for our function, as there is still room for improvement. For this chapter though, we leave it be.
###Code
def remove_html_markup(s):
tag = False
quote = False
out = ""
for c in s:
assert tag or not quote
if c == '<' and not quote:
tag = True
elif c == '>' and not quote:
tag = False
elif (c == '"' or c == "'") and tag:
quote = not quote
elif not tag:
out = out + c
return out
###Output
_____no_output_____
###Markdown
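Before committing, it can help to pin the fixed behavior down in an automated regression test. Here is a hypothetical pytest-style sketch (the test name is made up; the chapter itself uses plain `assert` statements in notebook cells):

```python
# Hypothetical regression test for the quote handling (sketch only)
def test_remove_html_markup_quotes():
    # A quoted '>' inside a tag must not end the tag prematurely
    assert remove_html_markup('<input type="text" value="<your name>">') == ""
    # Quotes outside of tags must be preserved
    assert remove_html_markup('<b>"foo"</b>') == '"foo"'
    assert remove_html_markup("'foo'") == "'foo'"
```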
Commit the Fix It may sound obvious, but your fix is worth nothing if it doesn't go into production. Be sure to commit your change to the code repository, together with your diagnosis. If your fix has to be approved by a third party, a good diagnosis on why and what happened is immensely helpful. Close the Bug ReportIf you [systematically track bugs](Tracking.ipynb), and your bug is properly tracked, now is the time to mark the issue as "resolved". Check for duplicates of the issue and check whether they are resolved, too. And now, you are finally done:![](https://media.giphy.com/media/nbJUuYFI6s0w0/giphy.gif)Time to relax – and look for the next bug! Become a Better DebuggerWe have now systematically fixed a bug. In this book, we will explore a number of techniques to make debugging easier – coming up with automated diagnoses, explanations, even automatic repairs, including for our example above. But there are also a number of things _you_ can do to become a better debugger. Follow the ProcessIf you're an experienced programmer, you may have spotted the problem in `remove_html_markup()` immediately, and may be tempted to start fixing the code right away. But this is dangerous and risky.Why is this so? Well, because you should first* try to understand the problem, and * have a full diagnosis before starting to fix away.You _can_ skip these steps, and jump right to your interactive debugger the very moment you see a failure, happily stepping through the program. This may even work well for simple problems, including this one. The risk, however, is that this narrows your view to just this one execution, which limits your ability to understand _all_ the circumstances of the problem. Even worse: If you start "fixing" the bug without exactly understanding the problem, you may end up with an incomplete solution – as illustrated in "The Devil's Guide to Debugging", above. Keep a LogA second risk of starting debugging too soon is that it lets you easily deviate from a systematic process. Remember how we wrote down every experiment in a table? How we numbered every hypothesis? This is not just for teaching. Writing these things down explicitly allows you to keep track of all your observations and hypotheses over time.|Input|Expectation|Output|Outcome||-----|-----------|------|-------||`<b>foo</b>`|`foo`|`foo`|✔|Every time you come up with a new hypothesis, you can immediately check it against your earlier observations, which will help you eliminate unlikely ones from the start. This is a bit like in the classic "Mastermind" board game, in which you have to guess some secret combination of pins, and in which your opponent gives you hints on whether and how your guesses are correct. At any time, you can see your previous guesses (experiments) and the results (observations) you got; any new guess (hypothesis) has to be consistent with the previous observations and experiments. ![Mastermind board game](https://upload.wikimedia.org/wikipedia/commons/2/2d/Mastermind.jpg) Keeping such a log also allows you to interrupt your debugging session at any time. You can be home in time, sleep on the problem, and resume the next morning with a refreshed mind. You can even hand over the log to someone else, stating your findings so far.The alternative to having a log is to _keep it all in memory_. This only works for short amounts of time, as it puts a higher and higher cognitive load on your memory as you debug along. After some time, you will forget earlier observations, which leads to mistakes.
Worst of all, any interruption will break your concentration and make you forget things, so you can't stop debugging until you're done.Sure, if you are a real master, you can stay glued to the screen all night. But I'd rather be home in time, thank you. RubberduckingA great technique to revisit your observations and to come up with new hypotheses is to _explain the problem to someone else_. In this process, the "someone else" is important, but even more important is that _you are explaining the problem to yourself_! As Kernighan and Pike \cite{Kernighan1999} put it:> Sometimes it takes no more than a few sentences, followed by an embarrassed "Never mind. I see what's wrong. Sorry to bother you."The reason why this works is that teaching someone else forces you to take different perspectives, and these help you resolve the inconsistency between what you assume and what you actually observe.Since that "someone else" can be totally passive, you can even replace her with an inanimate object to talk to – even a rubber duck. This technique is called *rubber duck debugging* or *rubberducking* – the idea is that you explain your problem to a rubber duck first before interrupting one of your co-workers with the problem. Some programmers, when asked for advice, explicitly request that you "explain your problem to the duck first", knowing that this resolves a good fraction of problems. ![Rubber duck debugging](https://upload.wikimedia.org/wikipedia/commons/d/d5/Rubber_duck_assisting_with_debugging.jpg) The Cost of Debugging\todo{add recent stuff on how much time debugging takes}And it's not only that debugging takes time – the worst thing is that it is a search process, which can take anything between a few minutes and several hours, sometimes even days and weeks. But even if you never know how much time a bug will take, it's a bit of a blessing to use a process which gradually gets you towards its cause. History of DebuggingEngineers and programmers have long used the term "bug" for faults in their systems – as if it were something that crept into an otherwise flawless program to cause the effects that none could explain. And from a psychological standpoint, it is far easier to blame some "bug" rather than taking responsibility ourselves. In the end, though, we have to face the fact: We made the bugs, and they are ours to fix.Having said that, there has been one recorded instance where a real bug has crept into a system. That was on September 9, 1947, when a moth got stuck in the relay of a Harvard Mark II machine. This event was logged, and the log book is now on display at the Smithsonian National Museum of American History, as "First actual case of bug being found." ![First actual case of bug being found](https://upload.wikimedia.org/wikipedia/commons/f/ff/First_Computer_Bug%2C_1945.jpg) The actual term "bug", however, is much older. What do you think is its origin?
###Code
import hashlib
bughash = hashlib.md5(b"debug").hexdigest()
quiz('Where has the name "bug" been used to denote disruptive events?',
[
'In the early days of Morse telegraphy, referring to a special key '
'that would send a string of dots',
'Among radio technicians to describe a device that '
'converts electromagnetic field variations into acoustic signals',
"In Shakespeare's " '"Henry VI", referring to a walking spectre',
'In Middle English, where the word "bugge" is the basis for terms '
'like "bugbear" and "bugaboo"'
],
[bughash.index(i) for i in "d42f"]
)
###Output
_____no_output_____
###Markdown
(Source: \cite{jargon}, \cite{wikipedia:debugging}) Synopsis In this chapter, we introduce some basics of how failures come to be as well as a general process for debugging. Lessons Learned1. An _error_ is a deviation from what is correct, right, or true. Specifically, * A _mistake_ is a human act or decision resulting in an error. * A _defect_ is an error in the program code. Also called *bug*. * A _fault_ is an error in the program state. Also called *infection*. * A _failure_ is an externally visible error in the program behavior. Also called *malfunction*.2. In a failing program execution, a mistake by the programmer results in a defect in the code, which creates a fault in the state, which propagates until it results in a failure. Tracing back fault propagation allows to identify the defect that causes the failure.3. In debugging, the _scientific method_ allows to systematically identify failure causes by gradually refining and refuting hypotheses based on experiments and observations.4. Before fixing the defect, have a complete _diagnosis_ that * shows _causality_ (how the defect causes the failure) * shows _incorrectness_ (how the defect is wrong)5. You can become a better debugger by * Following a systematic process like the scientific method * Keeping a log of your observations and hypotheses * Making your observations and conclusions explicit by telling them somebody (or something). Next StepsIn the next chapters, we will learn how to* [trace and observe executions](Tracer.ipynb)* [build your own interactive debugger](Debugger.ipynb)* [locate defects automatically by correlating failures and code coverage](StatisticalDebugger.ipynb)* [identify and simplify failure-inducing inputs](Reducer.ipynb)Enjoy! BackgroundThere are several good books on debugging, but these three are especially recommended:* _Debugging_ by Agans \cite{agans2006-debugging} takes a pragmatic approach to debugging, highlighting systematic approaches that help for all kinds of application-specific problems;* _Why Programs Fail_ by Zeller \cite{zeller2009-why-programs-fail} takes a more academic approach, creating theories of how failures come to be and systematic debugging processes;* _Effective Debugging_ by Spinellis \cite{spinellis2016-effective-debugging} aims for a middle ground between the two, creating general recipes and recommendations that easily instantiate towards specific problems.All these books focus on _manual_ debugging and the debugging process, just like this chapter; for _automated_ debugging, simply read on :-) Exercises Exercise 1: Get Acquainted with Notebooks and PythonYour first exercise in this book is to get acquainted with notebooks and Python, such that you can run the code examples in the book – and try out your own. Here are a few tasks to get you started. Beginner Level: Run Notebooks in Your BrowserThe easiest way to get access to the code is to run them in your browser.1. From the [Web Page](__CHAPTER_HTML__), check out the menu at the top. Select `Resources` $\rightarrow$ `Edit as Notebook`.2. After a short waiting time, this will open a Jupyter Notebook right within your browser, containing the current chapter as a notebook.3. You can again scroll through the material, but you click on any code example to edit and run its code (by entering Shift + Return). You can edit the examples as you please.4. Note that code examples typically depend on earlier code, so be sure to run the preceding code first.5. 
Any changes you make will not be saved (unless you save your notebook to disk).For help on Jupyter Notebooks, from the [Web Page](__CHAPTER_HTML__), check out the `Help` menu. Advanced Level: Run Python Code on Your MachineThis is useful if you want to make greater changes, but do not want to work with Jupyter.1. From the [Web Page](__CHAPTER_HTML__), check out the menu at the top. Select `Resources` $\rightarrow$ `Download Code`. 2. This will download the Python code of the chapter as a single Python .py file, which you can save to your computer.3. You can then open the file, edit it, and run it in your favorite Python environment to re-run the examples.4. Most importantly, you can [import it](Importing.ipynb) into your own code and reuse functions, classes, and other resources.For help on Python, from the [Web Page](__CHAPTER_HTML__), check out the `Help` menu. Pro Level: Run Notebooks on Your MachineThis is useful if you want to work with Jupyter on your machine. This will allow you to also run more complex examples, such as those with graphical output.1. From the [Web Page](__CHAPTER_HTML__), check out the menu at the top. Select `Resources` $\rightarrow$ `All Notebooks`. 2. This will download all Jupyter Notebooks as a collection of .ipynb files, which you can save to your computer.3. You can then open the notebooks in Jupyter Notebook or Jupyter Lab, edit them, and run them. To navigate across notebooks, open the notebook [`00_Table_of_Contents.ipynb`](00_Table_of_Contents.ipynb).4. You can also download individual notebooks using Select `Resources` $\rightarrow$ `Download Notebook`. Running these, however, will require that you have the other notebooks downloaded already.For help on Jupyter Notebooks, from the [Web Page](__CHAPTER_HTML__), check out the `Help` menu. Boss Level: Contribute!This is useful if you want to contribute to the book with patches or other material. It also gives you access to the very latest version of the book.1. From the [Web Page](__CHAPTER_HTML__), check out the menu at the top. Select `Resources` $\rightarrow$ `Project Page`. 2. This will get you to the GitHub repository which contains all sources of the book, including the latest notebooks.3. You can then _clone_ this repository to your disk, such that you get the latest and greatest.4. You can report issues and suggest pull requests on the GitHub page.5. Updating the repository with `git pull` will get you updated.If you want to contribute code or text, check out the [Guide for Authors](Guide_for_Authors.ipynb). Exercise 2: More Bugs!You may have noticed that our `remove_html_markup()` function is still not working perfectly under all circumstances. The error has something to do with different quotes occurring in the input. Part 1: Find the ProblemWhat does the problem look like? Set up a test case that demonstrates the problem.
###Code
assert(...)
###Output
_____no_output_____
###Markdown
Set up additional test cases as useful. **Solution.** The remaining problem stems from the fact that in `remove_html_markup()`, we do not differentiate between single and double quotes. Hence, if we have a _quote within a quoted text_, the function may get confused. Notably, a string that begins with a double quote may be interpreted as ending when a single quote is seen, and vice versa. Here's an example of such a string:```html">foo``` When we remove the HTML markup, the `>` in the string is interpreted as _unquoted_. Hence, it is interpreted as ending the tag, such that the rest of the tag is not removed.
###Code
s = '<b title="<Shakespeare' + "'s play>" + '">foo</b>'
s
remove_html_markup(s)
with ExpectError():
assert(remove_html_markup(s) == "foo")
###Output
Traceback (most recent call last):
File "<ipython-input-60-00bc84e50798>", line 2, in <module>
assert(remove_html_markup(s) == "foo")
AssertionError (expected)
###Markdown
Part 2: Identify Extent and CauseUsing the scientific method, identify the extent and cause of the problem. Write down your hypotheses and log your observations, as in|Input|Expectation|Output|Outcome||-----|-----------|------|-------||(input)|(expectation)|(output)|(outcome)| **Solution.** The first step is obviously|Input|Expectation|Output|Outcome||-----|-----------|------|-------||">foo|foo|"foo|✘| Part 3: Fix the ProblemDesign a fix for the problem. Show that it satisfies the earlier tests and does not violate any existing test. **Solution**. Here's an improved implementation that actually tracks the opening and closing quote by storing the quoting character in the `quote` variable. (If `quote` is `''`, we are not in a string.)
###Code
def remove_html_markup_with_proper_quotes(s):
tag = False
quote = ''
out = ""
for c in s:
assert tag or quote == ''
if c == '<' and quote == '':
tag = True
elif c == '>' and quote == '':
tag = False
elif (c == '"' or c == "'") and tag and quote == '':
# beginning of string
quote = c
elif c == quote:
# end of string
quote = ''
elif not tag:
out = out + c
return out
###Output
_____no_output_____
###Markdown
Python enthusiasts may note that we could also write `not quote` instead of `quote == ''`, leaving most of the original code untouched. We stick to classic Boolean comparisons here. The function now satisfies the earlier failing test:
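For the record, that alternative spelling would only swap the explicit comparisons for truthiness checks; a sketch, not the version used in the rest of the chapter:
```python
def remove_html_markup_truthy(s):
    tag = False
    quote = ''
    out = ""
    for c in s:
        assert tag or not quote
        if c == '<' and not quote:
            tag = True
        elif c == '>' and not quote:
            tag = False
        elif (c == '"' or c == "'") and tag and not quote:
            quote = c      # beginning of string
        elif c == quote:
            quote = ''     # end of string
        elif not tag:
            out = out + c
    return out
```
Either way, the check below exercises the previously failing input again: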
###Code
assert(remove_html_markup_with_proper_quotes(s) == "foo")
###Output
_____no_output_____
###Markdown
as well as all our earlier tests:
###Code
assert remove_html_markup_with_proper_quotes(
"Here's some <strong>strong argument</strong>.") == \
"Here's some strong argument."
assert remove_html_markup_with_proper_quotes(
'<input type="text" value="<your name>">') == ""
assert remove_html_markup_with_proper_quotes('<b>foo</b>') == 'foo'
assert remove_html_markup_with_proper_quotes('<b>"foo"</b>') == '"foo"'
assert remove_html_markup_with_proper_quotes('"<b>foo</b>"') == '"foo"'
assert remove_html_markup_with_proper_quotes('<"b">foo</"b">') == 'foo'
###Output
_____no_output_____ |
glue_examples/MNLI.ipynb | ###Markdown
MNLI : Multi-Genre Natural Language InferenceThe Multi-Genre Natural Language Inference (MNLI) task is a sentence pair classification task. It consists of crowdsourced sentence pairs with textual entailment annotations.See the [website](http://www.nyu.edu/projects/bowman/multinli/) and [paper](http://www.nyu.edu/projects/bowman/multinli/paper.pdf) for more info.
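To make the sentence-pair setup concrete, here are a few invented premise/hypothesis pairs for the three MNLI classes (illustrative only; these are not drawn from the actual corpus):
```python
# Hypothetical examples of the three MNLI labels (not taken from the dataset)
examples = [
    ("A man is playing a guitar on stage.", "A musician is performing.", "entailment"),
    ("A man is playing a guitar on stage.", "The concert hall is sold out.", "neutral"),
    ("A man is playing a guitar on stage.", "Nobody is playing an instrument.", "contradiction"),
]
for premise, hypothesis, label in examples:
    print(f"{label:13s} | premise: {premise} | hypothesis: {hypothesis}")
```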
###Code
import numpy as np
import pandas as pd
import os
import sys
import csv
from sklearn import metrics
from sklearn.metrics import classification_report
sys.path.append("../")
from bert_sklearn import BertClassifier
DATADIR = os.getcwd() + '/glue_data'
#DATADIR = '/data/glue_data'
%%time
%%bash
python3 download_glue_data.py --data_dir glue_data --tasks MNLI
"""
MNLI train data size: 392702
MNLI dev_matched data size: 9815
MNLI dev_mismatched data size: 9832
"""
def read_tsv(filename,quotechar=None):
with open(filename, "r", encoding='utf-8') as f:
return list(csv.reader(f,delimiter="\t",quotechar=quotechar))
def get_mnli_df(filename):
rows = read_tsv(filename)
df=pd.DataFrame(rows[1:],columns=rows[0])
df=df[['sentence1','sentence2','gold_label']]
df.columns=['text_a','text_b','label']
df = df[pd.notnull(df['label'])]
return df
def get_mnli_data(train_file = DATADIR + '/MNLI/train.tsv',
dev_matched_file = DATADIR + '/MNLI/dev_matched.tsv',
dev_mismatched_file = DATADIR + '/MNLI/dev_mismatched.tsv'):
train = get_mnli_df(train_file)
print("MNLI train data size: %d "%(len(train)))
dev_matched = get_mnli_df(dev_matched_file)
print("MNLI dev_matched data size: %d "%(len(dev_matched)))
dev_mismatched = get_mnli_df(dev_mismatched_file)
print("MNLI dev_mismatched data size: %d "%(len(dev_mismatched)))
label_list = np.unique(train['label'].values)
return train,dev_matched,dev_mismatched,label_list
train,dev_matched,dev_mismatched,label_list = get_mnli_data()
print(label_list)
train.head()
dev_matched.head()
dev_mismatched.head()
%%time
#nrows = 1000
#train = train.sample(nrows)
#dev_mismatched = dev_mismatched.sample(nrows)
#dev_matched = dev_matched.sample(nrows)
X_train = train[['text_a','text_b']]
y_train = train['label']
# define model
model = BertClassifier()
model.epochs = 4
model.learning_rate = 3e-5
model.max_seq_length = 128
model.validation_fraction = 0.05
print('\n',model,'\n')
# fit model
model.fit(X_train, y_train)
# score model on dev_matched
test = dev_matched
X_test = test[['text_a','text_b']]
y_test = test['label']
m_accy=model.score(X_test, y_test)
# score model on dev_mismatched
test = dev_mismatched
X_test = test[['text_a','text_b']]
y_test = test['label']
mm_accy=model.score(X_test, y_test)
print("Matched/mismatched accuracy: %0.2f/%0.2f %%"%(m_accy,mm_accy))
###Output
Building sklearn classifier...
BertClassifier(bert_model='bert-base-uncased', epochs=4, eval_batch_size=8,
fp16=False, gradient_accumulation_steps=1, label_list=None,
learning_rate=3e-05, local_rank=-1, logfile='bert.log',
loss_scale=0, max_seq_length=128, num_mlp_hiddens=500,
num_mlp_layers=0, random_state=42, restore_file=None,
train_batch_size=32, use_cuda=True, validation_fraction=0.05,
warmup_proportion=0.1)
Loading bert-base-uncased model...
Defaulting to linear classifier/regressor
train data size: 373067, validation data size: 19635
###Markdown
with MLP...
###Code
%%time
#nrows = 1000
#train = train.sample(nrows)
#dev_mismatched = dev_mismatched.sample(nrows)
#dev_matched = dev_matched.sample(nrows)
X_train = train[['text_a','text_b']]
y_train = train['label']
# define model
model = BertClassifier()
model.epochs = 4
model.learning_rate = 3e-5
model.max_seq_length = 128
model.validation_fraction = 0.05
model.num_mlp_layers = 4
print('\n',model,'\n')
# fit model
model.fit(X_train, y_train)
# score model on dev_matched
test = dev_matched
X_test = test[['text_a','text_b']]
y_test = test['label']
m_accy=model.score(X_test, y_test)
# score model on dev_mismatched
test = dev_mismatched
X_test = test[['text_a','text_b']]
y_test = test['label']
mm_accy=model.score(X_test, y_test)
print("Matched/mismatched accuracy: %0.2f/%0.2f %%"%(m_accy,mm_accy))
###Output
Building sklearn classifier...
BertClassifier(bert_model='bert-base-uncased', epochs=4, eval_batch_size=8,
fp16=False, gradient_accumulation_steps=1, label_list=None,
learning_rate=3e-05, local_rank=-1, logfile='bert.log',
loss_scale=0, max_seq_length=128, num_mlp_hiddens=500,
num_mlp_layers=4, random_state=42, restore_file=None,
train_batch_size=32, use_cuda=True, validation_fraction=0.05,
warmup_proportion=0.1)
Loading bert-base-uncased model...
Using mlp with D=768,H=500,K=3,n=4
train data size: 373067, validation data size: 19635
|
weak_recsys_dianping.ipynb | ###Markdown
Weakly Supervised Recommendation Systems Experiment steps: 1. **User's Preferences Model**: Leverage the most *explicit* ratings to build a *rate/rank prediction model*. This is a simple *Explicit Matrix Factorization* model. 2. **Generate Weak DataSet**: Use the above model to *predict* ratings for all user/item pairs $(u,i)$ in the *implicit feedback dataset*, building a new *weak explicit dataset* $(u, i, r^*)$. 3. **Evaluate**: Use the intact test split of the most explicit feedback to evaluate the performance of any model. Explicit Model Experiments This section contains all the experiments based on the explicit matrix factorization model. Explicit Rate Model
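As a reference for step 2 above, here is a minimal sketch of how a trained explicit matrix factorization model turns an unrated user/item pair into a weak label $r^* = p_u \cdot q_i$. The factor matrices below are made up for illustration; the actual experiments use the project's `utils` helpers and Spotlight models.
```python
import numpy as np

# Hypothetical latent factors of an already-trained explicit MF model
n_users, n_items, n_factors = 5, 7, 3
rng = np.random.default_rng(0)
P = rng.normal(size=(n_users, n_factors))   # user factors p_u
Q = rng.normal(size=(n_items, n_factors))   # item factors q_i

def weak_label(user_id, item_id):
    """Predicted rating r* for a (u, i) pair taken from the implicit feedback."""
    return float(P[user_id] @ Q[item_id])

print(weak_label(0, 3))  # weak rating assigned to user 0 / item 3
```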
###Code
import utils
from spotlight.evaluation import rmse_score
dataset_recommend_train, dataset_recommend_test, dataset_recommend_dev, dataset_implicit = utils.parse_dianping()
print('Explicit dataset (TEST) contains %s interactions of %s users and %s items'%(
format(len(dataset_recommend_test.ratings), ','),
format(dataset_recommend_test.num_users, ','),
format(dataset_recommend_test.num_items, ',')))
print('Explicit dataset (VALID) contains %s interactions of %s users and %s items'%(
format(len(dataset_recommend_dev.ratings), ','),
format(dataset_recommend_dev.num_users, ','),
format(dataset_recommend_dev.num_items, ',')))
print('Explicit dataset (TRAIN) contains %s interactions of %s users and %s items'%(
format(len(dataset_recommend_train.ratings), ','),
format(dataset_recommend_train.num_users, ','),
format(dataset_recommend_train.num_items, ',')))
print('Implicit dataset (READ/READING/TAG/COMMENT) contains %s interactions of %s users and %s items'%(
format(len(dataset_implicit.ratings), ','),
format(dataset_implicit.num_users, ','),
format(dataset_implicit.num_items, ',')))
# train the explicit model based on recommend feedback
model = utils.train_explicit(train_interactions=dataset_recommend_train,
valid_interactions=dataset_recommend_dev,
run_name='model_dianping_explicit_rate')
# evaluate the new model
mrr, ndcg, ndcg10, ndcg_5, mmap, success_10, success_5 = utils.evaluate(interactions=dataset_recommend_test,
model=model,
topk=20)
rmse = rmse_score(model=model, test=dataset_recommend_test)
print('-'*20)
print('RMSE: {:.4f}'.format(rmse))
print('MRR: {:.4f}'.format(mrr))
print('nDCG: {:.4f}'.format(ndcg))
print('nDCG@10: {:.4f}'.format(ndcg10))
print('nDCG@5: {:.4f}'.format(ndcg_5))
print('MAP: {:.4f}'.format(mmap))
print('success@10: {:.4f}'.format(success_10))
print('success@5: {:.4f}'.format(success_5))
###Output
Explicit dataset (TEST) contains 2,679 interactions of 2,115 users and 12,890 items
Explicit dataset (VALID) contains 2,680 interactions of 2,115 users and 12,890 items
Explicit dataset (TRAIN) contains 21,433 interactions of 2,115 users and 12,890 items
Implicit dataset (READ/READING/TAG/COMMENT) contains 211,194 interactions of 2,115 users and 12,890 items
--------------------
RMSE: 0.4332
MRR: 0.0102
nDCG: 0.0204
nDCG@10: 0.0093
nDCG@5: 0.0029
MAP: 0.0067
success@10: 0.0364
success@5: 0.0087
###Markdown
Remove valid/test ratings
###Code
test_interact = set()
for (uid, iid) in zip(dataset_recommend_test.user_ids, dataset_recommend_test.item_ids):
test_interact.add((uid, iid))
for (uid, iid) in zip(dataset_recommend_dev.user_ids, dataset_recommend_dev.item_ids):
test_interact.add((uid, iid))
# clean implicit dataset from test/dev rating
for idx, (uid, iid, r) in enumerate(zip(dataset_implicit.user_ids, dataset_implicit.item_ids, dataset_implicit.ratings)):
if (uid, iid) in test_interact:
dataset_implicit.ratings[idx] = -1
###Output
_____no_output_____
###Markdown
Explicit Read/Reading/Tag/Comment Model Leverage the **explicit rate model** trained at the previous section to annotate **missing values** in the **read/reading/tag/comment** dataset.
###Code
# annotate the missing values in the play dataset based on the explicit recommend model
dataset_implicit = utils.annotate(interactions=dataset_implicit,
model=model,
run_name='dataset_dianping_explicit_annotated')
# train the explicit model based on recommend feedback
model = utils.train_explicit(train_interactions=dataset_implicit,
valid_interactions=dataset_recommend_dev,
run_name='model_dianping_explicit_read')
# evaluate the new model
mrr, ndcg, ndcg10, ndcg_5, mmap, success_10, success_5 = utils.evaluate(interactions=dataset_recommend_test,
model=model,
topk=20)
rmse = rmse_score(model=model, test=dataset_recommend_test)
print('-'*20)
print('RMSE: {:.4f}'.format(rmse))
print('MRR: {:.4f}'.format(mrr))
print('nDCG: {:.4f}'.format(ndcg))
print('nDCG@10: {:.4f}'.format(ndcg10))
print('nDCG@5: {:.4f}'.format(ndcg_5))
print('MAP: {:.4f}'.format(mmap))
print('success@10: {:.4f}'.format(success_10))
print('success@5: {:.4f}'.format(success_5))
###Output
epoch 1 start at: Tue Apr 23 09:07:43 2019
epoch 1 end at: Tue Apr 23 09:07:44 2019
RMSE: 0.4632
epoch 2 start at: Tue Apr 23 09:07:44 2019
epoch 2 end at: Tue Apr 23 09:07:45 2019
RMSE: 0.4592
epoch 3 start at: Tue Apr 23 09:07:46 2019
epoch 3 end at: Tue Apr 23 09:07:47 2019
RMSE: 0.4567
epoch 4 start at: Tue Apr 23 09:07:47 2019
epoch 4 end at: Tue Apr 23 09:07:48 2019
RMSE: 0.4557
epoch 5 start at: Tue Apr 23 09:07:49 2019
epoch 5 end at: Tue Apr 23 09:07:50 2019
RMSE: 0.4525
epoch 6 start at: Tue Apr 23 09:07:50 2019
epoch 6 end at: Tue Apr 23 09:07:52 2019
RMSE: 0.4505
epoch 7 start at: Tue Apr 23 09:07:52 2019
epoch 7 end at: Tue Apr 23 09:07:53 2019
RMSE: 0.4515
--------------------
RMSE: 0.4446
MRR: 0.0309
nDCG: 0.0609
nDCG@10: 0.0359
nDCG@5: 0.0132
MAP: 0.0228
success@10: 0.1310
success@5: 0.0386
###Markdown
Implicit Model Experiments This section contains all the experiments based on the implicit matrix factorization model. Implicit Model using Negative Sampling
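For intuition: implicit-feedback models treat observed interactions as positives and contrast them against sampled unobserved items. Below is a minimal, self-contained sketch of drawing one random negative item per positive pair; the actual training is done by the project's `utils.train_implicit_negative_sampling` wrapper, whose sampling scheme may differ.
```python
import numpy as np

def sample_negatives(user_ids, item_ids, num_items, seed=0):
    """Draw one candidate negative item per observed (user, item) pair."""
    rng = np.random.default_rng(seed)
    observed = set(zip(user_ids.tolist(), item_ids.tolist()))
    negatives = []
    for u in user_ids.tolist():
        j = int(rng.integers(num_items))
        while (u, j) in observed:   # resample if we accidentally hit a known positive
            j = int(rng.integers(num_items))
        negatives.append(j)
    return np.array(negatives)
```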
###Code
import utils
from spotlight.evaluation import rmse_score
dataset_recommend_train, dataset_recommend_test, dataset_recommend_dev, dataset_implicit = utils.parse_dianping()
print('Explicit dataset (TEST) contains %s interactions of %s users and %s items'%(
format(len(dataset_recommend_test.ratings), ','),
format(dataset_recommend_test.num_users, ','),
format(dataset_recommend_test.num_items, ',')))
print('Explicit dataset (VALID) contains %s interactions of %s users and %s items'%(
format(len(dataset_recommend_dev.ratings), ','),
format(dataset_recommend_dev.num_users, ','),
format(dataset_recommend_dev.num_items, ',')))
print('Explicit dataset (TRAIN) contains %s interactions of %s users and %s items'%(
format(len(dataset_recommend_train.ratings), ','),
format(dataset_recommend_train.num_users, ','),
format(dataset_recommend_train.num_items, ',')))
print('Implicit dataset (READ/READING/TAG/COMMENT) contains %s interactions of %s users and %s items'%(
format(len(dataset_implicit.ratings), ','),
format(dataset_implicit.num_users, ','),
format(dataset_implicit.num_items, ',')))
# train the explicit model based on recommend feedback
model = utils.train_implicit_negative_sampling(train_interactions=dataset_implicit,
valid_interactions=dataset_recommend_dev,
run_name='model_dianping_implicit_read2')
# evaluate the new model
mrr, ndcg, ndcg10, ndcg_5, mmap, success_10, success_5 = utils.evaluate(interactions=dataset_recommend_test,
model=model,
topk=20)
rmse = rmse_score(model=model, test=dataset_recommend_test)
print('-'*20)
print('RMSE: {:.4f}'.format(rmse))
print('MRR: {:.4f}'.format(mrr))
print('nDCG: {:.4f}'.format(ndcg))
print('nDCG@10: {:.4f}'.format(ndcg10))
print('nDCG@5: {:.4f}'.format(ndcg_5))
print('MAP: {:.4f}'.format(mmap))
print('success@10: {:.4f}'.format(success_10))
print('success@5: {:.4f}'.format(success_5))
###Output
Explicit dataset (TEST) contains 2,679 interactions of 2,115 users and 12,890 items
Explicit dataset (VALID) contains 2,680 interactions of 2,115 users and 12,890 items
Explicit dataset (TRAIN) contains 21,433 interactions of 2,115 users and 12,890 items
Implicit dataset (READ/READING/TAG/COMMENT) contains 211,194 interactions of 2,115 users and 12,890 items
epoch 1 start at: Sat Apr 20 11:05:48 2019
epoch 1 end at: Sat Apr 20 11:05:49 2019
MRR: 0.0455
epoch 2 start at: Sat Apr 20 11:05:54 2019
epoch 2 end at: Sat Apr 20 11:05:55 2019
MRR: 0.0453
--------------------
RMSE: 4.0115
MRR: 0.0559
nDCG: 0.0586
nDCG@10: 0.0474
nDCG@5: 0.0342
MAP: 0.0337
success@10: 0.1289
success@5: 0.0692
###Markdown
Popularity
###Code
import utils
from popularity import PopularityModel
from spotlight.evaluation import rmse_score
dataset_recommend_train, dataset_recommend_test, dataset_recommend_dev, dataset_implicit = utils.parse_dianping()
print('Explicit dataset (TEST) contains %s interactions of %s users and %s items'%(
format(len(dataset_recommend_test.ratings), ','),
format(dataset_recommend_test.num_users, ','),
format(dataset_recommend_test.num_items, ',')))
print('Explicit dataset (VALID) contains %s interactions of %s users and %s items'%(
format(len(dataset_recommend_dev.ratings), ','),
format(dataset_recommend_dev.num_users, ','),
format(dataset_recommend_dev.num_items, ',')))
print('Explicit dataset (TRAIN) contains %s interactions of %s users and %s items'%(
format(len(dataset_recommend_train.ratings), ','),
format(dataset_recommend_train.num_users, ','),
format(dataset_recommend_train.num_items, ',')))
print('Implicit dataset (READ/READING/TAG/COMMENT) contains %s interactions of %s users and %s items'%(
format(len(dataset_implicit.ratings), ','),
format(dataset_implicit.num_users, ','),
format(dataset_implicit.num_items, ',')))
# train the explicit model based on recommend feedback
model = PopularityModel()
print('fit the model')
model.fit(interactions=dataset_recommend_train)
# evaluate the new model
print('evaluate the model')
mrr, ndcg, ndcg10, ndcg_5, mmap, success_10, success_5 = utils.evaluate(interactions=dataset_recommend_test,
model=model,
topk=20)
# rmse = rmse_score(model=model, test=dataset_recommend_test, batch_size=512)
# print('-'*20)
# print('RMSE: {:.4f}'.format(rmse))
print('MRR: {:.4f}'.format(mrr))
print('nDCG: {:.4f}'.format(ndcg))
print('nDCG@10: {:.4f}'.format(ndcg10))
print('nDCG@5: {:.4f}'.format(ndcg_5))
print('MAP: {:.4f}'.format(mmap))
print('success@10: {:.4f}'.format(success_10))
print('success@5: {:.4f}'.format(success_5))
###Output
Explicit dataset (TEST) contains 2,679 interactions of 2,115 users and 12,890 items
Explicit dataset (VALID) contains 2,680 interactions of 2,115 users and 12,890 items
Explicit dataset (TRAIN) contains 21,433 interactions of 2,115 users and 12,890 items
Implicit dataset (READ/READING/TAG/COMMENT) contains 211,194 interactions of 2,115 users and 12,890 items
fit the model
evaluate the model
MRR: 0.0458
nDCG: 0.0490
nDCG@10: 0.0397
nDCG@5: 0.0292
MAP: 0.0268
success@10: 0.1136
success@5: 0.0685
|
SPWLA_Facies_Classification.ipynb | ###Markdown
###Code
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy.signal import butter, filtfilt
from sklearn.model_selection import train_test_split,learning_curve
pd.set_option('display.expand_frame_repr', False)
pd.set_option('display.max_columns', None)
# pd.set_option('display.max_rows', None)
#Define the Filter function in toder to create the well log facies
def butter_lowpass(cutoff, fs, order=5):
nyq = 0.5 * fs
normal_cutoff = cutoff / nyq
b, a = butter(order, normal_cutoff, btype='low', analog=False)
return b, a
def butter_lowpass_filter(data, cutoff, fs, order=5):
b, a = butter_lowpass(cutoff, fs, order=order)
y = filtfilt(b, a, data)
return y
data = pd.read_csv('/content/drive/My Drive/Earthid/PythonBatch/well1.csv')
data['VELP']=1000000/data.DT
data = data[['DEPTH', 'RHOB', 'VELP', 'GR','FACIES' ]]
data = data.dropna(how='any')
data['RHOBF'] = butter_lowpass_filter(data.RHOB.values,10,1000/1, order=5)
data['VELPF'] = butter_lowpass_filter(data.VELP.values,10,1000/1, order=5)
data['GRF'] = butter_lowpass_filter(data.GR.values,10,1000/1, order=5)
data = data[['DEPTH', 'RHOB', 'VELP', 'GR','FACIES' ]]
data.columns
X_train = data.iloc[:,1:4].values
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
Y_train = data.iloc[:,-1].values
#X_train, x_test, Y_train, y_test = train_test_split(X_train, Y_train, test_size=0.2, random_state=42)
model = KNeighborsClassifier(n_neighbors=5)
model.fit(X_train,Y_train)
#### Correlation matrix
cor_train = data.corr()
cor_test = data.corr()
ax = sns.heatmap(
cor_train,
vmin=-1, vmax=1, center=0,
cmap='coolwarm',
square=True,annot = True)
ax.set_xticklabels(
ax.get_xticklabels(),
rotation=15,
horizontalalignment='right')
plt.show()
data.index
########################
y_pred = model.predict(X_train)
mnemonics = list(data.columns)
data = data.values
rows, cols = 1, 5
fig,ax = plt.subplots(nrows = rows, ncols=cols, figsize=(12,10), sharey=True)
for i in range(cols):
if i < cols-2:
ax[i].plot(data[:,i+1],data[:,0],'b', linewidth=0.8)
ax[i].minorticks_on()
ax[i].grid(which='major', linestyle='-', linewidth='0.5', color='black')
ax[i].grid(which='minor', linestyle=':', linewidth='0.5', color='black')
ax[i].set_ylim(max(data[:, 0]), min(data[:, 0]), 0)
ax[i].set_title('%s' %mnemonics[i+1])
elif i==cols-2:
F = np.vstack((data[:,-1],data[:,-1])).T
m = ax[i].imshow(F, aspect='auto',cmap='hot_r', extent=[0,1,max(data[:,0]), min(data[:,0])])
ax[i].set_title('%s' % mnemonics[i + 1])
elif i==cols-1:
F = np.vstack((y_pred,y_pred)).T
m = ax[i].imshow(F, aspect='auto',cmap='hot_r', extent=[0,1,max(data[:,0]), min(data[:,0])])
ax[i].set_title('PREDICTED')
cl = 60
y2 = data[:,3]
y1 = y2*0+cl
ax[2].fill_betweenx(data[:, 0], y1, y2, where=(y1 >= y2), color='gold', linewidth=0)
ax[2].fill_betweenx(data[:, 0], y1, y2, where=(y1 < y2), color='lime', linewidth=0)
plt.subplots_adjust(wspace=0)
plt.show()
###Output
_____no_output_____ |
notebooks/122819.ipynb | ###Markdown
1. Entropy drift, take 2
###Code
import os
import sys
sys.path.append('../examples')
sys.path.append('../jobs')
sys.path.append('../training_data')
from tqdm import trange
import torch
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt
from transformers import GPT2LMHeadModel, GPT2Tokenizer, GPT2Config
from generate_with_entropy import sample_sequence, sample_sequence_batch
import logging
logging.getLogger('transformers.tokenization_utils').setLevel(logging.ERROR)
# setup cell
def set_seed(seed=42, n_gpu=0):
np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
        torch.cuda.manual_seed_all(seed)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpus = torch.cuda.device_count()
set_seed()
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.to(device)
model.eval()
vocab_size = tokenizer.vocab_size
# generation, no truncation
file = '../training_data/gbw/test/100_lines.txt'
# file = '../training_data/cats.txt'
length = 100
# full gen, take 2
# turned off gradients for entropy, imported other method to deal with combining batches.
k = 0
with torch.no_grad():
avg_ents = torch.zeros((1, length), device=device)
i = 0
with open(file) as fp:
for line in fp:
context = tokenizer.encode(line)
ents = sample_sequence_batch(
model=model,
context=context,
length=length,
tokenizer=tokenizer,
temperature=0.968,
top_k=k,
top_p=0.0,
batch_size=32,
num_samples=128,
is_xlnet=False,
device=device
)
ents = ents.mean(axis=0)
avg_ents = (avg_ents * i + ents) / (i + 1)
i += 1
test = avg_ents.cpu().numpy()[0]
save_ents = np.zeros((2, length))
save_ents[0, :] = avg_ents.cpu().numpy()[0]
# full gen, take 2
# turned off gradients for entropy, imported other method to deal with combining batches.
k = 0
with torch.no_grad():
avg_ents = torch.zeros((1, length), device=device)
i = 0
with open(file) as fp:
for line in fp:
context = tokenizer.encode(line)
ents = sample_sequence_batch(
model=model,
context=context,
length=length,
tokenizer=tokenizer,
temperature=0.9957,
top_k=k,
top_p=0.0,
batch_size=32,
num_samples=128,
is_xlnet=False,
device=device
)
ents = ents.mean(axis=0)
avg_ents = (avg_ents * i + ents) / (i + 1)
i += 1
save_ents[1, :] = avg_ents.cpu().numpy()[0]
test = avg_ents.cpu().numpy()[0]
test
plt.plot(np.exp(old_ents[0, 9:]), label='full')
plt.plot(np.exp(test[9:]), label='T = 0.968')
plt.xlabel('t')
plt.ylabel('$e^H$')
plt.legend()
old_ents = np.load('cache_2.npz')['save_ents']
old_ents
plt.plot(np.exp(old_ents[0, 9:]), label='full')
plt.plot(np.exp(save_ents[0, 9:]), label='T = 1.1')
plt.plot(np.exp(save_ents[1, 9:]), label='T = 0.5')
plt.xlabel('t')
plt.ylabel('$e^H$')
plt.legend()
plt.plot(np.exp(old_ents[0, 9:]), label='no calibration')
plt.plot(np.exp(test[9:]), label='T = 0.998')
plt.xlabel('t')
plt.ylabel('$e^H$')
plt.legend()
print(np.exp(old_ents[0, 9]) - np.exp(old_ents[0, 99]))
print(np.exp(old_ents[1, 9]) - np.exp(old_ents[1, 99]))
print(np.exp(old_ents[2, 9]) - np.exp(old_ents[2, 99]))
print(np.exp(old_ents[3, 9]) - np.exp(old_ents[3, 99]))
print(np.exp(save_ents[0, 9]) - np.exp(save_ents[0, 99]))
print(np.exp(save_ents[1, 9]) - np.exp(save_ents[1, 99]))
print(np.exp(test[9]) - np.exp(test[99]))
# full gen, take 2
# turned off gradients for entropy, imported other method to deal with combining batches.
k = 0
with torch.no_grad():
avg_ents = torch.zeros((1, length), device=device)
i = 0
with open(file) as fp:
for line in fp:
context = tokenizer.encode(line)
ents = sample_sequence_batch(
model=model,
context=context,
length=length,
tokenizer=tokenizer,
temperature=1,
top_k=k,
top_p=0.0,
batch_size=32,
num_samples=128,
is_xlnet=False,
device=device
)
ents = ents.mean(axis=0)
avg_ents = (avg_ents * i + ents) / (i + 1)
i += 1
save_ents = np.zeros((5, length))
save_ents[0, :] = avg_ents.cpu().numpy()[0]
# full gen, take 2
k = 2048
with torch.no_grad():
avg_ents = torch.zeros((1, length), device=device)
i = 0
with open(file) as fp:
for line in fp:
context = tokenizer.encode(line)
ents = sample_sequence_batch(
model=model,
context=context,
length=length,
tokenizer=tokenizer,
temperature=1,
top_k=k,
top_p=0.0,
batch_size=32,
num_samples=128,
is_xlnet=False,
device=device
)
ents = ents.mean(axis=0)
avg_ents = (avg_ents * i + ents) / (i + 1)
i += 1
save_ents[1, :] = avg_ents.cpu().numpy()[0]
# full gen, take 2
k = 512
with torch.no_grad():
avg_ents = torch.zeros((1, length), device=device)
i = 0
with open(file) as fp:
for line in fp:
context = tokenizer.encode(line)
ents = sample_sequence_batch(
model=model,
context=context,
length=length,
tokenizer=tokenizer,
temperature=1,
top_k=k,
top_p=0.0,
batch_size=32,
num_samples=128,
is_xlnet=False,
device=device
)
ents = ents.mean(axis=0)
avg_ents = (avg_ents * i + ents) / (i + 1)
i += 1
save_ents[2, :] = avg_ents.cpu().numpy()[0]
# full gen, take 2
k = 128
with torch.no_grad():
avg_ents = torch.zeros((1, length), device=device)
i = 0
with open(file) as fp:
for line in fp:
context = tokenizer.encode(line)
ents = sample_sequence_batch(
model=model,
context=context,
length=length,
tokenizer=tokenizer,
temperature=1,
top_k=k,
top_p=0.0,
batch_size=32,
num_samples=128,
is_xlnet=False,
device=device
)
ents = ents.mean(axis=0)
avg_ents = (avg_ents * i + ents) / (i + 1)
i += 1
save_ents[3, :] = avg_ents.cpu().numpy()[0]
# full gen, take 2
k = 40
with torch.no_grad():
avg_ents = torch.zeros((1, length), device=device)
i = 0
with open(file) as fp:
for line in fp:
context = tokenizer.encode(line)
ents = sample_sequence_batch(
model=model,
context=context,
length=length,
tokenizer=tokenizer,
temperature=1,
top_k=k,
top_p=0.0,
batch_size=32,
num_samples=128,
is_xlnet=False,
device=device
)
ents = ents.mean(axis=0)
avg_ents = (avg_ents * i + ents) / (i + 1)
i += 1
save_ents[4, :] = avg_ents.cpu().numpy()[0]
save_ents = np.load('cache_2.npz')['save_ents']
# why is there this weird spike at token 3 or so?
a = 9
plt.plot(np.exp(old_ents[0, a:]), label='full')
plt.plot(np.exp(old_ents[1, a:]), label='top 2048')
plt.plot(np.exp(old_ents[2, a:]), label='top 512')
plt.plot(np.exp(old_ents[4, a:]), label='top 40')
plt.xlabel('t')
plt.ylabel('$e^H$')
plt.legend()
plt.plot(np.exp(save_ents[0, :]), label='full')
plt.plot(np.exp(save_ents[1, :]), label='top 2048')
plt.plot(np.exp(save_ents[2, :]), label='top 512')
plt.plot(np.exp(save_ents[3, :]), label='top 128')
plt.plot(np.exp(save_ents[4, :]), label='top 40')
plt.xlabel('t')
plt.ylabel('$e^H$')
plt.legend()
plt.title('Entropy blowup')
np.savez('cache_2', save_ents=save_ents)
# probing what happens to top 40 as we go for longer
length = 150
k = 40
with torch.no_grad():
avg_ents = torch.zeros((1, length), device=device)
i = 0
with open(file) as fp:
for line in fp:
context = tokenizer.encode(line)
ents = sample_sequence_batch(
model=model,
context=context,
length=length,
tokenizer=tokenizer,
temperature=1,
top_k=k,
top_p=0.0,
batch_size=32,
num_samples=128,
is_xlnet=False,
device=device
)
ents = ents.mean(axis=0)
avg_ents = (avg_ents * i + ents) / (i + 1)
i += 1
np.savez('top40_long', avg_ents=avg_ents.cpu().numpy())
test = avg_ents.cpu().numpy()[0]
plt.plot(np.exp(test[4:]))
###Output
_____no_output_____ |
lessons/20180403_Comparative_genomics_with_Circos_Like/Circos_output_config_files.ipynb | ###Markdown
Change name and location of the output file : config filesThis is a short sidetrack from the Circos_demo notebook. Here we try to put the change where Circos saves it's output in the cofnig file. You can learn a bit more on config files by going through this sidetrack. We can tell that Circos saved the plot in two formats (svg and png) in the directory from which we run this notebook (./). It is better to save the output somewhere else, namely in the output directory we created. We can specify the desired output location and name of the file on the commandline with `-outputdir /path/to/your/output/directory` and `-outputfile yourimage.png`, but we can also specify this in the configfile, which I prefer, because then we have a log of which settings lead to which output. After this, the file becomes: 1: Fo_vs_Fg.circos.conf. 2: 3: karyotype = circos/input/karyotypes/Fg_PH1.karyotype.txt, circos/input/karyotypes/Fol4287.karyotype.txt 4: 5: 6: 7: 8: default = 0.005r 9: 10: 11: radius = 0.9r 12: thickness = 20p 13: fill = yes 14: 15: show_label = yes 16: label_radius = dims(ideogram,radius_outer) + 10p 17: label_font = default 18: label_size = 24p 19: label_parallel = yes 20: 21: 22: 23: The remaining content is standard and required. It is imported 24: from default files in the Circos distribution. 25: 26: These should be present in every Circos configuration file and 27: overridden as required. To see the content of these files, 28: look in etc/ in the Circos distribution. 29: 30: 31: Included from Circos distribution. 32: dir = output/ 33: file = circos_karyotype_with_labels.png 34: 35: > 36: 37: 36: RGB/HSV color definitions, color lists, location of fonts, fill patterns. 37: Included from Circos distribution. 38: > 39: 40: Debugging, I/O an dother system parameters 41: Included from Circos distribution. 42: >
###Code
%%bash
/Applications/circos-0.69-6/bin/circos -conf circos/input/config_files/Fo_vs_Fg.circos.specify_output.conf
###Output
debuggroup summary 0.14s welcome to circos v0.69-6 31 July 2017 on Perl 5.018002
debuggroup summary 0.14s current working directory /Users/like/Dropbox/00.Projects/CircosPlot_FgFo
debuggroup summary 0.14s command /Applications/circos-0.69-6/bin/circos -conf circos/input/config_files/Fo_vs_Fg.circos.specify_output.conf
debuggroup summary 0.14s loading configuration from file circos/input/config_files/Fo_vs_Fg.circos.specify_output.conf
debuggroup summary 0.14s found conf file circos/input/config_files/Fo_vs_Fg.circos.specify_output.conf
$VAR1 = {
angle_offset => '-90',
auto_alpha_colors => 1,
auto_alpha_steps => '5',
background => 'white',
dir => [
'./circos/output',
'output/'
],
file => [
'circos_karyotype_with_labels.png',
'circos_karyotype_with_labels.png'
],
png => 1,
radius => '1500p',
svg => 1
};
*** CIRCOS ERROR ***
cwd: /Users/like/Dropbox/00.Projects/CircosPlot_FgFo
command: /Applications/circos-0.69-6/bin/circos -conf
circos/input/config_files/Fo_vs_Fg.circos.specify_output.conf
CONFIGURATION FILE ERROR
Configuration parameter [file] in parent block [image] has been defined more
than once in the block shown above, and has been interpreted as a list. This
is not allowed. Did you forget to comment out an old value of the parameter?
If you are having trouble debugging this error, first read the best practices
tutorial for helpful tips that address many common problems
http://www.circos.ca/documentation/tutorials/reference/best_practices
The debugging facility is helpful to figure out what's happening under the
hood
http://www.circos.ca/documentation/tutorials/configuration/debugging
If you're still stumped, get support in the Circos Google Group.
http://groups.google.com/group/circos-data-visualization
Please include this error, all your configuration, data files and the version
of Circos you're running (circos -v).Do not email me directly -- please use
the group.
Stack trace:
###Markdown
Aha! An error!Errors are an excellent opportunity to better understand how a program really works. Let's read the error message. At some point it says: Configuration parameter [file] in parent block [image] has been defined more than once in the block shown above, and has been interpreted as a list. This is not allowed. Did you forget to comment out an old value of the parameter?This tells us what went wrong. The configuration file is divided into blocks: e.g. the `<ideogram>` block (line 5 - 20), which specifies how the chromosomes are drawn, and then the `<image>` block (line 30 - 36).In this block (`parent block [image]`), we specified the name of the output image (line 33): `file = circos_karyotype_with_labels.png`: this is configuration parameter [file]. This makes sense, because it is the only thing we changed compared to the previous config file, which worked fine. Let's try to solve the problem:The `<image>` block also contains an `<<include>>` statement that pulls in another configuration file. Circos has a main config file into which you can import (include) other config files, so that when you start making really complicated plots, you don't have a single enormous config file in which you have to search for the parameters you want to adjust.In the directory/folder where you installed Circos (in my case `/Applications/circos-0.69-6/`) there is a folder `/etc` which contains a file `image.conf`. Read this file:
###Code
%%bash
less /Applications/circos-0.69-6/etc/image.conf
###Output
<<include image.generic.conf>>
<<include background.white.conf>>
###Markdown
So this file also imports other files; let's have a look at those:
###Code
%%bash
less /Applications/circos-0.69-6/etc/image.generic.conf
###Output
dir = .
#dir = conf(configdir)
file = circos.png
png = yes
svg = yes
# radius of inscribed circle in image
radius = 1500p
# by default angle=0 is at 3 o'clock position
angle_offset = -90
#angle_orientation = counterclockwise
auto_alpha_colors = yes
auto_alpha_steps = 5
###Markdown
Here we see that `dir` and `file` are already defined. We give Circos two options and the program will not choose. If we look back at the error message we see: dir => [ '.', 'output/' ], file => [ 'circos.png', 'circos_karyotype_with_labels.png' ],So we see that indeed Circos has two options here, the one specified in our config file `circos/input/config_files/Fo_vs_Fg.circos.conf` and one specified in `/Applications/circos-0.69-6/etc/image.generic.conf`. Let's make a new file `image.generic.Fo_vs_Fg.conf` that we will include in our config file.
###Code
%%bash
cp /Applications/circos-0.69-6/etc/image.generic.conf ./circos/input/config_files/
###Output
_____no_output_____
###Markdown
Open these files in a text editor and change the following: In `image.generic.conf`, change dir = . into dir = ./circos/output and file = circos.png into file = circos_karyotype_with_labels.png We don't need to change anything more. Circos first looks for files you've specified to include (such as `image.generic.conf`) in the folder (or folders below) the one from which you run Circos (in this case, the working directory). So Circos automatically takes the `image.generic.conf` in the working directory and ignores the one in `/etc` (it only looks for that one if it can't find an included config file in or below the folder it is run from). However, you should realize that it can be rather unclear for other users if you use the same file name. I prefer to rename `image.generic.conf` to something that is specific to this project, rather than have my future self or someone else figure out where Circos looks for config files first, etc.
###Code
%%bash
mv ./circos/input/config_files/image.generic.conf ./circos/input/config_files/image.generic.Fo_vs_Fg.conf
cp /Applications/circos-0.69-6/etc/image.conf ./circos/input/config_files/image.Fo_vs_Fg.conf
###Output
_____no_output_____ |
A/code/ex_Titanic_Pandas.ipynb | ###Markdown
If AgeGroup is Unknown, fill it in from the passenger's Title
###Code
for x in range(len(train["AgeGroup"])):
if train["AgeGroup"][x] == "Unknown":
train["AgeGroup"][x] = age_title_mapping[train["Title"][x]]
for x in range(len(test["AgeGroup"])):
if test["AgeGroup"][x] == "Unknown":
test["AgeGroup"][x] = age_title_mapping[test["Title"][x]]
# Fill in the Unknown values
#map each Age value to a numerical value
age_mapping = {'Baby': 1, 'Child': 2, 'Teenager': 3, 'Student': 4, 'Young Adult': 5, 'Adult': 6, 'Senior': 7}
train['AgeGroup'] = train['AgeGroup'].map(age_mapping)
test['AgeGroup'] = test['AgeGroup'].map(age_mapping)
train.head()
###Output
_____no_output_____ |
examples/01_Defining_Parameters.ipynb | ###Markdown
Defining parameters================This notebook demonstrates how I typically define model parameters. The most critical parameter is the baseline, as this will affect all the statistics rasters produced by the algorithm. It is important to define a baseline period that reflects the conditions that are *expected* on the target date.
###Code
%matplotlib inline
import pandas as pd
from datetime import datetime
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
import re
from datetime import timedelta
import ee
ee.Initialize()
from s1flood import calc_basemean, calc_basesd, calc_zscore
from s1flood import mapFloods, floodPalette
import geemap
from geemap import ee_basemaps
from ipywidgets import Label
###Output
_____no_output_____
###Markdown
Use the map below to define a point of interest. An area around this point will be used to filter the Sentinel-1 ImageCollection and to centre the maps.
###Code
def parseClickedCoordinates(label):
coords = [float(c) for c in re.findall(r'(?:-)?[0-9]+.[0-9]+', label.value)]
coords.reverse()
return coords
l = Label()
display(l)
def handle_interaction(**kwargs):
if kwargs.get('type') == 'click':
l.value = str(kwargs.get('coordinates'))
Map = geemap.Map(basemap = ee_basemaps['Esri Satellite'])
Map.on_interaction(handle_interaction)
Map
lon, lat = parseClickedCoordinates(l)
w, h = 1, 1 # search window in degrees
geometry = ee.Geometry.Polygon(
[[[lon - w, lat - h],
[lon - w, lat + h],
[lon + w, lat + h],
[lon + w, lat - h]]]
)
###Output
_____no_output_____
###Markdown
After clicking a location on the above map and parsing the coordinates in the cell above this one, we are now ready to set up our input Sentinel-1 collection. To do so, define your target date and the start and end of the baseline period. We might come back and change these based on our results below.
###Code
targdate = "2020-03-01"
basestart = "2019-09-15"
baseend = "2020-02-01"
filters = [
ee.Filter.listContains("transmitterReceiverPolarisation", "VV"),
ee.Filter.listContains("transmitterReceiverPolarisation", "VH"),
ee.Filter.equals("instrumentMode", "IW"),
ee.Filter.geometry(geometry),
ee.Filter.date('2015-01-01', ee.Date(targdate).advance(1, 'day'))
]
###Output
_____no_output_____
###Markdown
Now with our filters set up, we can load the Sentinel-1 ImageCollection, filter it, and compute Z-scores for further analysis. We will compute the Z-scores for the descending orbit only. To include both orbital directions, run `calc_zscore()` independently for each orbital direction and merge the collections before proceeding.
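For example, once `s1` is defined in the next cell, covering both orbital directions could look like this sketch (assuming `calc_zscore` returns an `ee.ImageCollection`, as it is used here):
```python
# Sketch: compute Z-scores per orbital direction and merge the two collections
z_desc = calc_zscore(s1, basestart, baseend, 'IW', 'DESCENDING')
z_asc = calc_zscore(s1, basestart, baseend, 'IW', 'ASCENDING')
z = z_desc.merge(z_asc).sort('system:time_start')
```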
###Code
s1 = ee.ImageCollection("COPERNICUS/S1_GRD").filter(filters)
z = calc_zscore(s1, basestart, baseend, 'IW', 'DESCENDING')
###Output
_____no_output_____
###Markdown
Once the Z-score map has loaded in the map below, click a location to extract its time series, which we will visualize in a minute.
###Code
Map = geemap.Map(basemap = geemap.ee_basemaps['Esri Satellite'])
Map.setCenter(lon, lat, 11)
Map.addLayer(s1.select('VV'), {'min': -25, 'max': 0}, 'VV Backscatter (dB); {0}'.format(targdate))
zpalette = ['#b2182b','#ef8a62','#fddbc7','#f7f7f7','#d1e5f0','#67a9cf','#2166ac']
Map.addLayer(z.select('VV'), {'min': -5, 'max': 5, 'palette': zpalette}, 'VV Z-score; {0}'.format(targdate))
label = Label()
def handle_interaction(**kwargs):
if kwargs.get('type') == 'click':
label.value = str(kwargs.get('coordinates'))
Map.on_interaction(handle_interaction)
Map
###Output
_____no_output_____
###Markdown
The function below will extract the backscatter and Z-score time series from our ImageCollection and save them as a pandas DataFrame
###Code
def get_ts(p):
x = s1.filter(ee.Filter.equals('instrumentMode', 'IW')) \
.sort('system:time_start') \
.getRegion(p, scale = 30) \
.getInfo()
xz = z.getRegion(p, scale = 30).getInfo()
x = x[1:]
xz = xz[1:]
s1df = pd.DataFrame({
'ID': [y[0] for y in x],
'VV': [y[4] for y in x],
'VH': [y[5] for y in x]
})
zdf = pd.DataFrame({
'ID': [y[0] for y in xz],
'ZVV': [y[4] for y in xz],
'ZVH': [y[5] for y in xz]
})
def get_date(f):
datestr = re.findall(r'[0-9]+T[0-9]+', f)[0]
return datetime.strptime(datestr, "%Y%m%dT%H%M%S")
s1df = s1df.assign(date = [get_date(i) for i in s1df['ID']])
zdf = zdf.assign(date = [get_date(i) for i in zdf['ID']])
df = s1df.merge(zdf, 'inner', on = 'date')[['date', 'VV', 'VH', 'ZVV', 'ZVH']]
return df
coords = parseClickedCoordinates(label)
p = ee.Geometry.Point(coords)
df = get_ts(p).query("date > '2017-01-01'") # change this date to shorten/lengthen the time series panels below
###Output
_____no_output_____
###Markdown
Now we will use matplotlib to visualize the time series. The baseline period you defined above is shown as a light blue region in these time series. You can use this plot to chedck whether your baseline period is appropriate. For example, a high standard deviation within the baseline period will result in low absolute Z-scores, which may not "dampen" the signal during actual flood events.
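One quick way to quantify that check is to summarize the clicked point's backscatter inside the baseline window directly, as in this small sketch that reuses the `df`, `basestart`, and `baseend` defined above:
```python
# Sketch: baseline mean and standard deviation for the extracted time series
base = df[(df['date'] >= pd.Timestamp(basestart)) & (df['date'] < pd.Timestamp(baseend))]
print(base[['VV', 'VH']].agg(['mean', 'std']))
```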
###Code
fig, ax = plt.subplots(2, 1, figsize = [15, 6], sharex = True)
df.plot(x = 'date', y = 'VV', ax = ax[0], style = 'k.')
df.plot(x = 'date', y = 'VH', ax = ax[0], style = 'r.')
df.plot(x = 'date', y = 'ZVV', ax = ax[1], style = 'k.')
df.plot(x = 'date', y = 'ZVH', ax = ax[1], style = 'r.')
ax[0].set_ylabel("$\sigma_{0}\;(dB)$", fontsize = 14)
ax[1].set_ylabel("$Z$", fontsize = 14)
# show baseline period
xy0 = datetime.strptime(basestart, "%Y-%m-%d"), ax[0].get_ylim()[0]
xy1 = datetime.strptime(basestart, "%Y-%m-%d"), ax[1].get_ylim()[0]
w = datetime.strptime(baseend, "%Y-%m-%d") - datetime.strptime(basestart, "%Y-%m-%d")
h0 = ax[0].get_ylim()[1] - ax[0].get_ylim()[0]
h1 = ax[1].get_ylim()[1] - ax[1].get_ylim()[0]
ax[0].add_patch(Rectangle(xy0, w, h0, alpha = 0.1))
ax[1].add_patch(Rectangle(xy1, w, h1, alpha = 0.1))
# show Z=0 line and an example Z-score threshold line
ax[1].axhline(0, c = 'black', linewidth = 0.5)
ax[1].axhline(-2.5, c = 'magenta', linestyle = '--', alpha = 0.5)
###Output
_____no_output_____
###Markdown
When you are satisfied with the parameters you have chosen, run the flood mapping algorithm.
###Code
zvv_thd = -2.5 # VV Z-score threshold
zvh_thd = -2.5 # VH Z-score threshold
pin_thd = 50 # historical P(inundation) threshold (%)
pow_thd = 90 # permanent open water threshold; historical P(open water) (%)
floods = mapFloods(z.mosaic(), zvv_thd, zvh_thd, use_dswe = True, pin_thd = pin_thd, pow_thd = pow_thd)
floods = floods.updateMask(floods.gt(0))
Map = geemap.Map(basemap = ee_basemaps['Esri Satellite'])
Map.addLayer(floods, {'min': 0, 'max': 20, 'palette': floodPalette}, "Flood Map, {0}".format(targdate))
Map.setCenter(coords[0], coords[1], 10)
Map
###Output
_____no_output_____ |
docs/source/basic_functionality/Group.ipynb | ###Markdown
Group objects
###Code
import k3d
positions=[[0,0,0], [0,1,3], [2,2,1]]
group = k3d.points(positions, point_size=0.2, shader='mesh') + \
k3d.line(positions, shader='mesh', width=0.05)
group
group + k3d.mesh(positions, [0,1,2], color=0xff70ca)
###Output
_____no_output_____ |
Model backlog/Inference/270-tweet-inference-5fold-roberta-jaccard-cosine-w.ipynb | ###Markdown
Dependencies
###Code
import json, glob
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts_aux import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras import layers
from tensorflow.keras.models import Model
###Output
_____no_output_____
###Markdown
Load data
###Code
test = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/test.csv')
print('Test samples: %s' % len(test))
display(test.head())
###Output
Test samples: 3534
###Markdown
Model parameters
###Code
input_base_path = '/kaggle/input/270-robertabase/'
with open(input_base_path + 'config.json') as json_file:
config = json.load(json_file)
config
# vocab_path = input_base_path + 'vocab.json'
# merges_path = input_base_path + 'merges.txt'
base_path = '/kaggle/input/qa-transformers/roberta/'
vocab_path = base_path + 'roberta-base-vocab.json'
merges_path = base_path + 'roberta-base-merges.txt'
config['base_model_path'] = base_path + 'roberta-base-tf_model.h5'
config['config_path'] = base_path + 'roberta-base-config.json'
model_path_list = glob.glob(input_base_path + '*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep='\n')
###Output
Models to predict:
/kaggle/input/270-robertabase/model_fold_1.h5
/kaggle/input/270-robertabase/model_fold_2.h5
/kaggle/input/270-robertabase/model_fold_3.h5
/kaggle/input/270-robertabase/model_fold_4.h5
/kaggle/input/270-robertabase/model_fold_5.h5
###Markdown
Tokenizer
###Code
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
lowercase=True, add_prefix_space=True)
###Output
_____no_output_____
###Markdown
Pre process
###Code
test['text'].fillna('', inplace=True)
test['text'] = test['text'].apply(lambda x: x.lower())
test['text'] = test['text'].apply(lambda x: x.strip())
x_test, x_test_aux, x_test_aux_2 = get_data_test(test, tokenizer, config['MAX_LEN'], preprocess_fn=preprocess_roberta_test)
###Output
_____no_output_____
###Markdown
Model
###Code
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name='base_model')
last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
logits = layers.Dense(2, use_bias=False, name='qa_outputs')(last_hidden_state)
start_logits, end_logits = tf.split(logits, 2, axis=-1, name='logits')
start_logits = tf.squeeze(start_logits, axis=-1, name='y_start')
end_logits = tf.squeeze(end_logits, axis=-1, name='y_end')
x_jaccard = layers.GlobalAveragePooling1D()(last_hidden_state)
y_jaccard = layers.Dense(1, activation='linear', name='y_jaccard')(x_jaccard)
model = Model(inputs=[input_ids, attention_mask], outputs=[start_logits, end_logits, y_jaccard])
return model
###Output
_____no_output_____
###Markdown
Make predictions
###Code
NUM_TEST_IMAGES = len(test)
test_start_preds = np.zeros((NUM_TEST_IMAGES, config['MAX_LEN']))
test_end_preds = np.zeros((NUM_TEST_IMAGES, config['MAX_LEN']))
for model_path in model_path_list:
print(model_path)
model = model_fn(config['MAX_LEN'])
model.load_weights(model_path)
test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE']))
test_start_preds += test_preds[0]
test_end_preds += test_preds[1]
###Output
/kaggle/input/270-robertabase/model_fold_1.h5
/kaggle/input/270-robertabase/model_fold_2.h5
/kaggle/input/270-robertabase/model_fold_3.h5
/kaggle/input/270-robertabase/model_fold_4.h5
/kaggle/input/270-robertabase/model_fold_5.h5
###Markdown
Post process
###Code
test['start'] = test_start_preds.argmax(axis=-1)
test['end'] = test_end_preds.argmax(axis=-1)
test['selected_text'] = test.apply(lambda x: decode(x['start'], x['end'], x['text'], config['question_size'], tokenizer), axis=1)
# Post-process
test["selected_text"] = test.apply(lambda x: ' '.join([word for word in x['selected_text'].split() if word in x['text'].split()]), axis=1)
test['selected_text'] = test.apply(lambda x: x['text'] if (x['selected_text'] == '') else x['selected_text'], axis=1)
test['selected_text'].fillna(test['text'], inplace=True)
###Output
_____no_output_____
###Markdown
Visualize predictions
###Code
test['text_len'] = test['text'].apply(lambda x : len(x))
test['label_len'] = test['selected_text'].apply(lambda x : len(x))
test['text_wordCnt'] = test['text'].apply(lambda x : len(x.split(' ')))
test['label_wordCnt'] = test['selected_text'].apply(lambda x : len(x.split(' ')))
test['text_tokenCnt'] = test['text'].apply(lambda x : len(tokenizer.encode(x).ids))
test['label_tokenCnt'] = test['selected_text'].apply(lambda x : len(tokenizer.encode(x).ids))
test['jaccard'] = test.apply(lambda x: jaccard(x['text'], x['selected_text']), axis=1)
display(test.head(10))
display(test.describe())
###Output
_____no_output_____
###Markdown
Test set predictions
###Code
submission = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/sample_submission.csv')
submission['selected_text'] = test['selected_text']
submission.to_csv('submission.csv', index=False)
submission.head(10)
###Output
_____no_output_____ |
Lab_4/my_recommender.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
jester = pd.read_csv('https://raw.githubusercontent.com/albanda/CE888/master/lab4-recommender/jester-data-1.csv', header=None)
jester.head()
jester.describe().transpose()
jester_nan = jester.replace(99, np.nan)
jester_nan.isna().sum()
len(jester[0].unique())
jester_nan.drop(columns=0, axis=1, inplace=True)
jester_nan.head()
jester_nan.mean().sort_values(ascending=True)
arr = jester_nan.values
np.where(arr!= 99)
def helper_function(orig, percentage=0.1):
"""
Replaces 'percentage'% of the original values in 'orig' with 99's
:param orig: original data array
:param percentage: percentage of values to replace (0<percentage<1)
"""
new_data = orig.copy()
rated = np.where(~np.isnan(orig))
n_rated = len(rated[0])
idx = np.random.choice(n_rated, size=int(percentage*n_rated), replace=False)
new_data[rated[0][idx], rated[1][idx]] = 99
return new_data, (rated[0][idx], rated[1][idx])
new_arr, idx = helper_function(arr)
arr[idx[0][0], idx[1][0]]
new_arr[idx[0][0], idx[1][0]]
jester.groupby(0).count()
users = jester[0].unique()
users
def predict_rating(user_id, item_id):
""" Predict a rating given a user_id and an item_id.
"""
user_preference = latent_user_preferences[user_id]
item_preference = latent_item_features[item_id]
return user_preference.dot(item_preference)
def train(user_id, item_id, rating, alpha=0.0001):
#print(item_id)
predicted_rating = predict_rating(user_id, item_id)
err = predicted_rating - rating
#print(err)
user_pref_values = latent_user_preferences[user_id]
latent_user_preferences[user_id] -= alpha * err * latent_item_features[item_id]
latent_item_features[item_id] -= alpha * err * user_pref_values
return err
def sgd(iterations):
""" Iterate over all users and all items and train for
a certain number of iterations
"""
mse_history = []
for iteration in range(iterations):
error = []
for user_id in range(latent_user_preferences.shape[0]):
for item_id in range(latent_item_features.shape[0]):
rating = user_ratings[user_id, item_id]
if not np.isnan(rating):
err = train(user_id, item_id, rating)
error.append(err)
mse = (np.array(error) ** 2).mean()
if (iteration % 5) == 0:
print('Iteration %d/%d:\tMSE=%.6f' % (iteration, iterations, mse))
mse_history.append(mse)
return mse_history
n_latent_factors = 4
user_ratings = jester_nan.values
# Initialise as random values
latent_user_preferences = np.random.random((user_ratings.shape[0], n_latent_factors))
latent_item_features = np.random.random((user_ratings.shape[1], n_latent_factors))
###Output
_____no_output_____
###Markdown
MSE of validation set
###Code
err =[]
for u, i in zip(*idx):
err.append(predict_rating(u,i) - user_ratings[u,i])
print('MSE of validation set: ', (np.array(err)**2).mean())
###Output
MSE of validation set: 15.681413447844411
###Markdown
Prediction of test set
###Code
test_idx = np.where(np.isnan(user_ratings))
for u, i in zip(*test_idx):
print('Prediction for {} user_id for item_id {} is {}'.format(u, i, predict_rating(u,i)))
test_idx
num_iter = 100
hist = sgd(num_iter) # Note how the MSE decreases with the number of iterations
plt.figure()
plt.plot(np.arange(0, num_iter, 5), hist)
plt.xlabel("Iterations")
plt.ylabel("MSE")
plt.show()
jester_nan.iloc[55,13]
df_items = pd.read_excel('https://github.com/albanda/CE888/blob/master/lab4-recommender/movies_latent_factors.xlsx?raw=true')
df_users = pd.read_excel('https://github.com/albanda/CE888/blob/master/lab4-recommender/movies_latent_factors.xlsx?raw=true', sheet_name=1)
df_users.set_index('User', inplace=True)
df_items.set_index('Movie ID', inplace=True)
df_items[df_items['Factor15'] == df_items['Factor15'].min()]
items_latent = df_items.drop('Title', axis=1).values
user_latent = df_users.values
user_latent.shape
items_latent.shape
def predict_movie_rating(user, movie):
user_arr = df_users.loc[user].values
movie_arr = df_items.loc[movie].drop('Title').values
return user_arr.dot(movie_arr)
def top_n_recommendation(user, n):
pred = {}
for movie in df_items.index:
pred[movie] = predict_movie_rating(user, movie)
pred = dict(sorted(pred.items(), key=lambda kv: kv[1], reverse=True))
movies = list(pred.keys())
return movies[:n]
predict_movie_rating(783, 807)
predict_movie_rating(156, 9331)
top_n_recommendation(4940, 3)
df_items.loc[top_n_recommendation(1882, 2)]['Title']
###Output
_____no_output_____ |
code/ch08/ch08.ipynb | ###Markdown
Copyright (c) 2015, 2016 [Sebastian Raschka](sebastianraschka.com)https://github.com/rasbt/python-machine-learning-book[MIT License](https://github.com/rasbt/python-machine-learning-book/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 8 - Applying Machine Learning To Sentiment Analysis Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,scikit-learn,nltk
###Output
Sebastian Raschka
Last updated: 01/20/2016
CPython 3.5.1
IPython 4.0.1
numpy 1.10.1
pandas 0.17.1
matplotlib 1.5.0
scikit-learn 0.17
nltk 3.1
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview - [Obtaining the IMDb movie review dataset](Obtaining-the-IMDb-movie-review-dataset)- [Introducing the bag-of-words model](Introducing-the-bag-of-words-model) - [Transforming words into feature vectors](Transforming-words-into-feature-vectors) - [Assessing word relevancy via term frequency-inverse document frequency](Assessing-word-relevancy-via-term-frequency-inverse-document-frequency) - [Cleaning text data](Cleaning-text-data) - [Processing documents into tokens](Processing-documents-into-tokens)- [Training a logistic regression model for document classification](Training-a-logistic-regression-model-for-document-classification)- [Working with bigger data – online algorithms and out-of-core learning](Working-with-bigger-data-–-online-algorithms-and-out-of-core-learning)- [Summary](Summary) Obtaining the IMDb movie review dataset The IMDB movie review set can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/).After downloading the dataset, decompress the files.A) If you are working with Linux or MacOS X, open a new terminal windowm `cd` into the download directory and execute `tar -zxf aclImdb_v1.tar.gz`B) If you are working with Windows, download an archiver such as [7Zip](http://www.7-zip.org) to extract the files from the download archive. Compatibility Note:I received an email from a reader who was having troubles with reading the movie review texts due to encoding issues. Typically, Python's default encoding is set to `'utf-8'`, which shouldn't cause troubles when running this IPython notebook. You can simply check the encoding on your machine by firing up a new Python interpreter from the command line terminal and execute >>> import sys >>> sys.getdefaultencoding() If the returned result is **not** `'utf-8'`, you probably need to change your Python's encoding to `'utf-8'`, for example by typing `export PYTHONIOENCODING=utf8` in your terminal shell prior to running this IPython notebook. (Note that this is a temporary change, and it needs to be executed in the same shell that you'll use to launch `ipython notebook`.Alternatively, you can replace the lines with open(os.path.join(path, file), 'r') as infile: ... pd.read_csv('./movie_data.csv') ... df.to_csv('./movie_data.csv', index=False)by with open(os.path.join(path, file), 'r', encoding='utf-8') as infile: ... pd.read_csv('./movie_data.csv', encoding='utf-8') ... df.to_csv('./movie_data.csv', index=False, encoding='utf-8') in the following cells to achieve the desired effect.
###Code
import pyprind
import pandas as pd
import os
# change the `basepath` to the directory of the
# unzipped movie dataset
#basepath = '/Users/Sebastian/Desktop/aclImdb/'
basepath = './aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in os.listdir(path):
with open(os.path.join(path, file), 'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]], ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
###Output
0% 100%
[##############################] | ETA: 00:00:00
Total time elapsed: 00:06:23
###Markdown
Shuffling the DataFrame:
###Code
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
###Output
_____no_output_____
###Markdown
Optional: Saving the assembled data as CSV file:
###Code
df.to_csv('./movie_data.csv', index=False)
import pandas as pd
df = pd.read_csv('./movie_data.csv')
df.head(3)
###Output
_____no_output_____
###Markdown
Note: If you have problems with creating the `movie_data.csv` file in the previous chapter, you can find a zip archive for download at https://github.com/rasbt/python-machine-learning-book/tree/master/code/datasets/movie Introducing the bag-of-words model ... Transforming documents into feature vectors
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining and the weather is sweet'])
bag = count.fit_transform(docs)
print(count.vocabulary_)
print(bag.toarray())
###Output
[[0 1 1 1 0 1 0]
[0 1 0 0 1 1 1]
[1 2 1 1 1 2 1]]
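###Markdown
To see which word each column of the array corresponds to, the `vocabulary_` mapping can be inverted; the following cell is a small illustrative sketch that is not part of the original example.
###Code
# Sketch: label the columns of the bag-of-words array with their words
inv_vocab = {idx: word for word, idx in count.vocabulary_.items()}
print([inv_vocab[i] for i in range(len(inv_vocab))])
print(bag.toarray())
###Output
_____no_output_____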
###Markdown
Assessing word relevancy via term frequency-inverse document frequency
###Code
np.set_printoptions(precision=2)
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
tf_is = 2
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
###Markdown
Cleaning text data
###Code
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
###Output
_____no_output_____
###Markdown
Processing documents into tokens
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
###Output
_____no_output_____
###Markdown
Training a logistic regression model for document classification Strip HTML and punctuation to speed up the GridSearch later:
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=-1)
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
###Output
Test Accuracy: 0.899
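###Markdown
As a quick sanity check of what the model has learned, we can look at the terms with the largest positive and negative weights in the fitted logistic regression. The following cell is only a sketch; the exact terms depend on the hyperparameters selected by the grid search.
###Code
# Sketch: inspect the most positive and most negative terms of the best pipeline
import numpy as np

best_vect = clf.named_steps['vect']
best_lr = clf.named_steps['clf']
feature_names = np.array(best_vect.get_feature_names())
coefs = best_lr.coef_.ravel()

print('Most positive terms:', feature_names[np.argsort(coefs)[-10:]])
print('Most negative terms:', feature_names[np.argsort(coefs)[:10]])
###Output
_____no_output_____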
###Markdown
Start comment: Please note that `gs_lr_tfidf.best_score_` is the average k-fold cross-validation score. I.e., if we have a `GridSearchCV` object with 5-fold cross-validation (like the one above), the `best_score_` attribute returns the average score over the 5-folds of the best model. To illustrate this with an example:
###Code
from sklearn.cross_validation import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
import numpy as np
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
cv5_idx = list(StratifiedKFold(y, n_folds=5, shuffle=False, random_state=0))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
###Output
_____no_output_____
###Markdown
By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (`cv5_idx`) to the `cross_val_score` scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds. Next, let us use the `GridSearchCV` object and feed it the same 5 cross-validation sets (via the pre-generated `cv5_idx` indices):
###Code
from sklearn.grid_search import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
###Output
Fitting 5 folds for each of 1 candidates, totalling 5 fits
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.400000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.200000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
###Markdown
As we can see, the scores for the 5 folds are exactly the same as the ones from `cross_val_score` earlier. Now, the best_score_ attribute of the `GridSearchCV` object, which becomes available after `fit`ting, returns the average accuracy score of the best model:
###Code
gs.best_score_
###Output
_____no_output_____
###Markdown
As we can see, the result above is consistent with the average score computed by `cross_val_score`.
###Code
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
###Output
_____no_output_____
###Markdown
End comment. Working with bigger data - online algorithms and out-of-core learning
###Code
import numpy as np
import re
from nltk.corpus import stopwords
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='./movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path='./movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
###Output
_____no_output_____
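###Markdown
Since the `HashingVectorizer` does not have to be fitted, persisting the incrementally trained classifier is enough to reuse the model later (the vectorizer can simply be re-created). The cell below is a minimal sketch using the standard `pickle` module; the file name `clf.pkl` is just an example.
###Code
# Sketch: serialize the out-of-core classifier and load it back
import pickle

with open('clf.pkl', 'wb') as f:
    pickle.dump(clf, f)

with open('clf.pkl', 'rb') as f:
    clf_reloaded = pickle.load(f)
###Output
_____no_output_____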
###Markdown
Chapter 8. Applying Machine Learning to Sentiment Analysis **You can view this notebook in the Jupyter notebook viewer (nbviewer.jupyter.org) or run it in Google Colab (colab.research.google.com) via the links below.** View in the Jupyter notebook viewer Run in Google Colab `watermark` is a utility for printing the versions of the Python packages used in a Jupyter notebook. To install the `watermark` package, uncomment the line in the following cell and run it.
###Code
#!pip install watermark
%load_ext watermark
%watermark -u -d -v -p numpy,pandas,sklearn,nltk
###Output
last updated: 2019-12-29
CPython 3.7.3
IPython 7.5.0
numpy 1.16.3
pandas 0.24.2
sklearn 0.22
nltk 3.4.1
###Markdown
Preparing the IMDb movie review data for text processing Obtaining the movie review dataset The IMDb movie review dataset can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz](http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz). After downloading, decompress the files. A) If you are using Linux or macOS, open a new terminal window, `cd` into the download directory and run `tar -zxf aclImdb_v1.tar.gz`. B) If you are using Windows, you can install a free archiver such as 7Zip (http://www.7-zip.org) to extract the files from the downloaded archive. **To download the file directly on Colab or Linux, uncomment the following cell and run it.**
###Code
#!wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
###Output
_____no_output_____
###Markdown
**Alternatively, you can extract the archive directly from Python as follows:**
###Code
import os
import tarfile
if not os.path.isdir('aclImdb'):
with tarfile.open('aclImdb_v1.tar.gz', 'r:gz') as tar:
tar.extractall()
###Output
_____no_output_____
###Markdown
Preprocessing the movie dataset into a more convenient format `pyprind` is a utility for displaying a progress bar in Jupyter notebooks. To install the `pyprind` package, uncomment the line in the following cell and run it.
###Code
#!pip install pyprind
import pyprind
import pandas as pd
import os
# change the `basepath` to the directory of the
# unzipped movie dataset
basepath = 'aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in sorted(os.listdir(path)):
with open(os.path.join(path, file),
'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]],
ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
###Output
0% [##############################] 100% | ETA: 00:00:00
Total time elapsed: 00:01:50
###Markdown
Shuffling the DataFrame:
###Code
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
###Output
_____no_output_____
###Markdown
Optional: Saving the assembled data as a CSV file:
###Code
df.to_csv('movie_data.csv', index=False, encoding='utf-8')
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
df.shape
###Output
_____no_output_____
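###Markdown
Before building any model, it can be worth confirming that the two classes are balanced; the one-line check below is a small sketch added for convenience.
###Code
# Sketch: the dataset should contain 25,000 positive and 25,000 negative reviews
df['sentiment'].value_counts()
###Output
_____no_output_____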
###Markdown
Introducing the bag-of-words model Transforming words into feature vectors By calling the fit_transform method of CountVectorizer, we construct the vocabulary of the bag-of-words model and transform the following three sentences into sparse feature vectors: 1. The sun is shining 2. The weather is sweet 3. The sun is shining, the weather is sweet, and one and one is two
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
###Output
_____no_output_____
###Markdown
Printing the contents of the vocabulary helps us understand the concepts behind the bag-of-words model:
###Code
print(count.vocabulary_)
###Output
{'the': 6, 'sun': 4, 'is': 1, 'shining': 3, 'weather': 8, 'sweet': 5, 'and': 0, 'one': 2, 'two': 7}
###Markdown
As we can see from the previous result, the vocabulary is stored in a Python dictionary that maps the unique words to integer indices. Next, let us print the feature vectors we just created: each index position in the feature vectors corresponds to the integer value stored in the CountVectorizer vocabulary dictionary. For example, the first feature at index position 0 is the count of the word 'and', which only occurs in the last document, and the word 'is' at index position 1 (the second column of the feature vectors) occurs in all three sentences. These values in the feature vectors are also called term frequencies: *tf(t, d)* denotes the number of times a term t occurs in a document d.
###Code
print(bag.toarray())
###Output
[[0 1 0 1 1 0 1 0 0]
[0 1 0 0 0 1 1 0 1]
[2 3 2 1 1 1 2 1 1]]
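###Markdown
For readability, the same count matrix can be wrapped in a small pandas DataFrame with the vocabulary as column labels; this cell is just an illustrative sketch.
###Code
# Sketch: display the bag-of-words counts together with their words
import pandas as pd

pd.DataFrame(bag.toarray(), columns=count.get_feature_names())
###Output
_____no_output_____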
###Markdown
Assessing word relevancy via term frequency-inverse document frequency
###Code
np.set_printoptions(precision=2)
###Output
_____no_output_____
###Markdown
When analyzing text data, we often encounter words that appear in documents from both classes. Such frequently occurring words usually do not carry useful or discriminatory information. In this section we will learn about tf-idf (term frequency-inverse document frequency), a technique that downweights frequently occurring words in the feature vectors. tf-idf is defined as the product of the term frequency and the inverse document frequency:$$\text{tf-idf}(t,d)=\text{tf}(t,d)\times \text{idf}(t,d)$$Here tf(t, d) is the term frequency from the previous section, and the inverse document frequency *idf(t, d)* is computed as:$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$where $n_d$ is the total number of documents and *df(d, t)* is the number of documents d that contain the term t. Adding the constant 1 to the denominator is optional; it keeps the denominator from becoming zero for words that never appear in the training samples. The log ensures that the inverse document frequency does not grow too large when the document frequency *df(d, t)* is low. The scikit-learn library provides the `TfidfTransformer` class, which takes the raw term frequencies produced by the `CountVectorizer` class as input and transforms them into tf-idfs:
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True,
norm='l2',
smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs))
.toarray())
###Output
[[0. 0.43 0. 0.56 0.56 0. 0.43 0. 0. ]
[0. 0.43 0. 0. 0. 0.56 0.43 0. 0.56]
[0.5 0.45 0.5 0.19 0.19 0.19 0.3 0.25 0.19]]
###Markdown
As we saw in the previous section, the word 'is' had the largest term frequency because it appears most often in the third document. After transforming the same feature vectors into tf-idfs, the word 'is' is associated with a relatively small tf-idf (0.45). Since this word also appears in the first and second documents, it is unlikely to carry useful, discriminatory information. If we computed the tf-idf of each word in the feature vectors by hand, however, we would notice that the `TfidfTransformer` calculates tf-idf slightly differently from the standard formula defined earlier. The inverse document frequency implemented in scikit-learn is computed as follows: $$\text{idf} (t,d) = log\frac{1 + n_d}{1 + \text{df}(d, t)}$$Similarly, the tf-idf computed in scikit-learn differs slightly from the formula we defined earlier:$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$It is common to normalize the term frequencies (tf) before computing the tf-idfs, but the `TfidfTransformer` class normalizes the tf-idfs directly. By default, scikit-learn's `TfidfTransformer` applies L2 normalization (norm='l2'). Dividing an unnormalized feature vector v by its L2 norm returns a vector of length 1:$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$To understand how TfidfTransformer works, let us walk through an example and calculate the tf-idf of the word 'is' in the third document. In the third document the word 'is' has a term frequency of 3 (tf = 3), and since this word appears in all three documents its document frequency is 3 (df = 3). The inverse document frequency is therefore:$$\text{idf}("is", d3) = log \frac{1+3}{1+3} = 0$$Now, to calculate the tf-idf, we add 1 to the inverse document frequency and multiply it by the term frequency:$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
###Code
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
###Output
tf-idf of term "is" = 3.00
###Markdown
If we repeat this calculation for every word in the third document, we obtain the tf-idf vector [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]. The values in this feature vector differ from those we obtained from the TfidfTransformer earlier. The final step missing in this tf-idf calculation is the following L2 normalization: $$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0 , 1.69, 1.29]}{\sqrt{[3.39^2, 3.0^2, 3.39^2, 1.29^2, 1.29^2, 1.29^2, 2.0^2 , 1.69^2, 1.29^2]}}$$$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$$$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$ As we can see, the result now matches the value returned by scikit-learn's `TfidfTransformer`. Now that we understand how tf-idfs are calculated, let us move on to the next sections and apply these concepts to the movie review dataset.
###Code
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
###Markdown
Cleaning text data
###Code
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)',
text)
text = (re.sub('[\W]+', ' ', text.lower()) +
' '.join(emoticons).replace('-', ''))
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
df['review'].map(preprocessor)
###Output
_____no_output_____
###Markdown
Processing documents into tokens
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
###Output
_____no_output_____
###Markdown
Training a logistic regression model for document classification
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(solver='liblinear', random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=1)
###Output
_____no_output_____
###Markdown
**About the `n_jobs` parameter** In the preceding code example, it is recommended to set `n_jobs=-1` (instead of `n_jobs=1`) to use all available CPU cores and speed up the grid search. On some systems, however, setting `n_jobs=-1` for multiprocessing can cause problems with serializing the `tokenizer` and `tokenizer_porter` functions. In that case you can work around the issue by replacing `[tokenizer, tokenizer_porter]` with `[str.split]`; note that the simple `str.split` does not perform stemming. **About the running time** Executing the next code cell **may take about 30-60 minutes** depending on your machine, because the parameter grid we defined requires training 2*2*2*3*5 + 2*2*2*3*5 = 240 models. **Even on Colab the run can take a long time, since only a few CPU cores are available.** If you do not want to wait that long, you can reduce the number of training samples, for example: X_train = df.loc[:2500, 'review'].values y_train = df.loc[:2500, 'sentiment'].values Reducing the training set size will, however, lower the model's performance. You can also remove parameters from the grid to reduce the number of models to train, for example: param_grid = [{'vect__ngram_range': [(1, 1)], 'vect__stop_words': [stop, None], 'vect__tokenizer': [tokenizer], 'clf__penalty': ['l1', 'l2'], 'clf__C': [1.0, 10.0]}, ]
###Code
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
###Output
Test Accuracy: 0.899
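###Markdown
Once the grid search has finished, the best pipeline can be applied directly to raw text. The two example sentences below are made up for illustration; treat this cell as a sketch rather than part of the original notebook.
###Code
# Sketch: score a couple of hand-written example reviews with the best pipeline
example_reviews = ['This movie was absolutely wonderful, I loved every minute of it',
                   'What a boring, badly acted waste of two hours']
print(clf.predict(example_reviews))
print(clf.predict_proba(example_reviews))
###Output
_____no_output_____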
###Markdown
Working with bigger data - online algorithms and out-of-core learning
###Code
# This cell is not contained in the book; it was added for convenience
# so that this section can be run without executing the previous code.
import os
import gzip
if not os.path.isfile('movie_data.csv'):
if not os.path.isfile('movie_data.csv.gz'):
print('Please place a copy of the movie_data.csv.gz'
'in this directory. You can obtain it by'
'a) executing the code in the beginning of this'
'notebook or b) by downloading it from GitHub:'
'https://github.com/rasbt/python-machine-learning-'
'book-2nd-edition/blob/master/code/ch08/movie_data.csv.gz')
else:
in_f = gzip.open('movie_data.csv.gz', 'rb')
out_f = open('movie_data.csv', 'wb')
out_f.write(in_f.read())
in_f.close()
out_f.close()
import numpy as np
import re
from nltk.corpus import stopwords
# The `stop` object was defined earlier, but it is recreated here so that
# this part of the notebook can be run without executing the previous code.
stop = stopwords.words('english')
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
pass
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
clf = SGDClassifier(loss='log', random_state=1, max_iter=1)
doc_stream = stream_docs(path='movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
###Output
_____no_output_____
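###Markdown
The incrementally trained classifier can also be applied to new text on the fly, because the `HashingVectorizer` needs no fitted vocabulary. The review below is invented for illustration; this cell is a sketch.
###Code
# Sketch: classify a new, unseen review with the out-of-core model
new_review = ['I enjoyed every minute of this film, the acting was superb']
X_new = vect.transform(new_review)
print('Prediction (0 = negative, 1 = positive): %d' % clf.predict(X_new)[0])
###Output
_____no_output_____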
###Markdown
Topic modeling LDA with scikit-learn
###Code
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer(stop_words='english',
max_df=.1,
max_features=5000)
X = count.fit_transform(df['review'].values)
from sklearn.decomposition import LatentDirichletAllocation
lda = LatentDirichletAllocation(n_components=10,
random_state=123,
learning_method='batch')
X_topics = lda.fit_transform(X)
lda.components_.shape
n_top_words = 5
feature_names = count.get_feature_names()
for topic_idx, topic in enumerate(lda.components_):
print("Topic %d:" % (topic_idx + 1))
print(" ".join([feature_names[i]
for i in topic.argsort()\
[:-n_top_words - 1:-1]]))
###Output
Topic 1:
worst minutes awful script stupid
Topic 2:
family mother father children girl
Topic 3:
american war dvd music tv
Topic 4:
human audience cinema art sense
Topic 5:
police guy car dead murder
Topic 6:
horror house sex girl woman
Topic 7:
role performance comedy actor performances
Topic 8:
series episode war episodes tv
Topic 9:
book version original read novel
Topic 10:
action fight guy guys cool
###Markdown
Based on the five most important words of each topic, we may guess that LDA identified the following topics: 1. Generally bad movies (not really a topic category) 2. Movies about families 3. War movies 4. Art movies 5. Crime movies 6. Horror movies 7. Comedy movies 8. Movies somehow related to TV shows 9. Movies based on novels 10. Action movies To confirm that the categories make sense, let us print three reviews from the horror movie category (horror movies are category 6, i.e., index position 5):
###Code
horror = X_topics[:, 5].argsort()[::-1]
for iter_idx, movie_idx in enumerate(horror[:3]):
print('\nHorror movie #%d:' % (iter_idx + 1))
print(df['review'][movie_idx][:300], '...')
###Output
Horror movie #1:
House of Dracula works from the same basic premise as House of Frankenstein from the year before; namely that Universal's three most famous monsters; Dracula, Frankenstein's Monster and The Wolf Man are appearing in the movie together. Naturally, the film is rather messy therefore, but the fact that ...
Horror movie #2:
Okay, what the hell kind of TRASH have I been watching now? "The Witches' Mountain" has got to be one of the most incoherent and insane Spanish exploitation flicks ever and yet, at the same time, it's also strangely compelling. There's absolutely nothing that makes sense here and I even doubt there ...
Horror movie #3:
<br /><br />Horror movie time, Japanese style. Uzumaki/Spiral was a total freakfest from start to finish. A fun freakfest at that, but at times it was a tad too reliant on kitsch rather than the horror. The story is difficult to summarize succinctly: a carefree, normal teenage girl starts coming fac ...
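###Markdown
If a single label per review is needed, each document can be assigned its most probable topic from the document-topic matrix returned by `fit_transform`. The short sketch below counts how many reviews fall into each of the ten topics.
###Code
# Sketch: assign each review to its most probable topic and count the assignments
import numpy as np

dominant_topic = X_topics.argmax(axis=1)
print(np.bincount(dominant_topic, minlength=10))
###Output
_____no_output_____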
###Markdown
Copyright (c) 2015, 2016 [Sebastian Raschka](sebastianraschka.com)https://github.com/rasbt/python-machine-learning-book[MIT License](https://github.com/rasbt/python-machine-learning-book/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 8 - Applying Machine Learning To Sentiment Analysis Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,sklearn,nltk
###Output
Sebastian Raschka
last updated: 2016-09-29
CPython 3.5.2
IPython 5.1.0
numpy 1.11.1
pandas 0.18.1
matplotlib 1.5.1
sklearn 0.18
nltk 3.2.1
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview - [Obtaining the IMDb movie review dataset](Obtaining-the-IMDb-movie-review-dataset)- [Introducing the bag-of-words model](Introducing-the-bag-of-words-model) - [Transforming words into feature vectors](Transforming-words-into-feature-vectors) - [Assessing word relevancy via term frequency-inverse document frequency](Assessing-word-relevancy-via-term-frequency-inverse-document-frequency) - [Cleaning text data](Cleaning-text-data) - [Processing documents into tokens](Processing-documents-into-tokens)- [Training a logistic regression model for document classification](Training-a-logistic-regression-model-for-document-classification)- [Working with bigger data – online algorithms and out-of-core learning](Working-with-bigger-data-–-online-algorithms-and-out-of-core-learning)- [Summary](Summary)
###Code
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
###Output
_____no_output_____
###Markdown
Obtaining the IMDb movie review dataset The IMDB movie review set can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/).After downloading the dataset, decompress the files.A) If you are working with Linux or MacOS X, open a new terminal windowm `cd` into the download directory and execute `tar -zxf aclImdb_v1.tar.gz`B) If you are working with Windows, download an archiver such as [7Zip](http://www.7-zip.org) to extract the files from the download archive. Compatibility Note:I received an email from a reader who was having troubles with reading the movie review texts due to encoding issues. Typically, Python's default encoding is set to `'utf-8'`, which shouldn't cause troubles when running this IPython notebook. You can simply check the encoding on your machine by firing up a new Python interpreter from the command line terminal and execute >>> import sys >>> sys.getdefaultencoding() If the returned result is **not** `'utf-8'`, you probably need to change your Python's encoding to `'utf-8'`, for example by typing `export PYTHONIOENCODING=utf8` in your terminal shell prior to running this IPython notebook. (Note that this is a temporary change, and it needs to be executed in the same shell that you'll use to launch `ipython notebook`.Alternatively, you can replace the lines with open(os.path.join(path, file), 'r') as infile: ... pd.read_csv('./movie_data.csv') ... df.to_csv('./movie_data.csv', index=False)by with open(os.path.join(path, file), 'r', encoding='utf-8') as infile: ... pd.read_csv('./movie_data.csv', encoding='utf-8') ... df.to_csv('./movie_data.csv', index=False, encoding='utf-8') in the following cells to achieve the desired effect.
###Code
import pyprind
import pandas as pd
import os
# change the `basepath` to the directory of the
# unzipped movie dataset
#basepath = '/Users/Sebastian/Desktop/aclImdb/'
basepath = './aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in os.listdir(path):
with open(os.path.join(path, file), 'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]], ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
###Output
0% 100%
[##############################] | ETA: 00:00:00
Total time elapsed: 00:09:04
###Markdown
Shuffling the DataFrame:
###Code
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
###Output
_____no_output_____
###Markdown
Optional: Saving the assembled data as CSV file:
###Code
df.to_csv('./movie_data.csv', index=False)
import pandas as pd
df = pd.read_csv('./movie_data.csv')
df.head(3)
###Output
_____no_output_____
###Markdown
Note: If you have problems with creating the `movie_data.csv` file in the previous chapter, you can find a zip archive for download at https://github.com/rasbt/python-machine-learning-book/tree/master/code/datasets/movie Introducing the bag-of-words model ... Transforming documents into feature vectors By calling the fit_transform method on CountVectorizer, we just constructed the vocabulary of the bag-of-words model and transformed the following three sentences into sparse feature vectors: 1. The sun is shining 2. The weather is sweet 3. The sun is shining, the weather is sweet, and one and one is two
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
###Output
_____no_output_____
###Markdown
Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts:
###Code
print(count.vocabulary_)
###Output
{'one': 2, 'sweet': 5, 'the': 6, 'shining': 3, 'weather': 8, 'and': 0, 'two': 7, 'is': 1, 'sun': 4}
###Markdown
As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary that maps the unique words to integer indices. Next let us print the feature vectors that we just created: Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the CountVectorizer vocabulary. For example, the first feature at index position 0 resembles the count of the word 'and', which only occurs in the last document, and the word 'is' at index position 1 (the 2nd feature in the document vectors) occurs in all three sentences. Those values in the feature vectors are also called the raw term frequencies: *tf (t,d)*—the number of times a term t occurs in a document *d*.
###Code
print(bag.toarray())
###Output
[[0 1 0 1 1 0 1 0 0]
[0 1 0 0 0 1 1 0 1]
[2 3 2 1 1 1 2 1 1]]
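###Markdown
The `ngram_range` parameter that appears in the grid search later in this chapter controls whether single words or longer word sequences are counted. As a small sketch that is not part of the original text, this is what a 2-gram vocabulary of the same three documents looks like:
###Code
# Sketch: CountVectorizer with 2-grams (pairs of consecutive words) instead of 1-grams
count_2gram = CountVectorizer(ngram_range=(2, 2))
bag_2gram = count_2gram.fit_transform(docs)
print(count_2gram.vocabulary_)
###Output
_____no_output_____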
###Markdown
Assessing word relevancy via term frequency-inverse document frequency
###Code
np.set_printoptions(precision=2)
###Output
_____no_output_____
###Markdown
When we are analyzing text data, we often encounter words that occur across multiple documents from both classes. Those frequently occurring words typically don't contain useful or discriminatory information. In this subsection, we will learn about a useful technique called term frequency-inverse document frequency (tf-idf) that can be used to downweight those frequently occurring words in the feature vectors. The tf-idf can be defined as the product of the term frequency and the inverse document frequency:$$\text{tf-idf}(t,d)=\text{tf}(t,d)\times \text{idf}(t,d)$$Here the tf(t, d) is the term frequency that we introduced in the previous section, and the inverse document frequency *idf(t, d)* can be calculated as:$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$where $n_d$ is the total number of documents, and *df(d, t)* is the number of documents *d* that contain the term *t*. Note that adding the constant 1 to the denominator is optional and serves the purpose of assigning a non-zero value to terms that occur in all training samples; the log is used to ensure that low document frequencies are not given too much weight. Scikit-learn implements yet another transformer, the `TfidfTransformer`, that takes the raw term frequencies from `CountVectorizer` as input and transforms them into tf-idfs:
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
###Output
[[ 0. 0.43 0. 0.56 0.56 0. 0.43 0. 0. ]
[ 0. 0.43 0. 0. 0. 0.56 0.43 0. 0.56]
[ 0.5 0.45 0.5 0.19 0.19 0.19 0.3 0.25 0.19]]
###Markdown
As we saw in the previous subsection, the word 'is' had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word 'is' is now associated with a relatively small tf-idf (0.45) in document 3 since it is also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information. However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the `TfidfTransformer` calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier. The equations for the idf and tf-idf that were implemented in scikit-learn are: $$\text{idf} (t,d) = log\frac{1 + n_d}{1 + \text{df}(d, t)}$$The tf-idf equation that was implemented in scikit-learn is as follows:$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$While it is also more typical to normalize the raw term frequencies before calculating the tf-idfs, the `TfidfTransformer` normalizes the tf-idfs directly. By default (`norm='l2'`), scikit-learn's TfidfTransformer applies the L2-normalization, which returns a vector of length 1 by dividing an un-normalized feature vector *v* by its L2-norm:$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$To make sure that we understand how TfidfTransformer works, let us walk through an example and calculate the tf-idf of the word 'is' in the 3rd document. The word 'is' has a term frequency of 3 (tf = 3) in document 3, and the document frequency of this term is 3 since the term 'is' occurs in all three documents (df = 3). Thus, we can calculate the idf as follows:$$\text{idf}("is", d3) = log \frac{1+3}{1+3} = 0$$Now in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
###Code
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
###Output
tf-idf of term "is" = 3.00
###Markdown
If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vector: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0 , 1.69, 1.29]. However, we notice that the values in this feature vector are different from the values that we obtained from the TfidfTransformer that we used previously. The final step that we are missing in this tf-idf calculation is the L2-normalization, which can be applied as follows: $$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0 , 1.69, 1.29]}{\sqrt{[3.39^2, 3.0^2, 3.39^2, 1.29^2, 1.29^2, 1.29^2, 2.0^2 , 1.69^2, 1.29^2]}}$$$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$$$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$ As we can see, the results match the results returned by scikit-learn's `TfidfTransformer` (below). Since we now understand how tf-idfs are calculated, let us proceed to the next sections and apply those concepts to the movie review dataset.
###Code
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
###Markdown
Cleaning text data
###Code
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
###Output
_____no_output_____
###Markdown
Processing documents into tokens
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
###Output
_____no_output_____
###Markdown
Training a logistic regression model for document classification Strip HTML and punctuation to speed up the GridSearch later:
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
if Version(sklearn_version) < '0.18':
from sklearn.grid_search import GridSearchCV
else:
from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=-1)
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
###Output
Test Accuracy: 0.899
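###Markdown
Beyond the single accuracy number, a confusion matrix shows how the errors are distributed over the two classes. The following cell is a sketch that evaluates the fitted best estimator on the held-out test split.
###Code
# Sketch: confusion matrix of the best pipeline on the test set
from sklearn.metrics import confusion_matrix

y_pred = clf.predict(X_test)
print(confusion_matrix(y_true=y_test, y_pred=y_pred))
###Output
_____no_output_____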
###Markdown
Start comment: Please note that `gs_lr_tfidf.best_score_` is the average k-fold cross-validation score. I.e., if we have a `GridSearchCV` object with 5-fold cross-validation (like the one above), the `best_score_` attribute returns the average score over the 5-folds of the best model. To illustrate this with an example:
###Code
from sklearn.linear_model import LogisticRegression
import numpy as np
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import StratifiedKFold
from sklearn.cross_validation import cross_val_score
else:
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
if Version(sklearn_version) < '0.18':
cv5_idx = list(StratifiedKFold(y, n_folds=5, shuffle=False, random_state=0))
else:
cv5_idx = list(StratifiedKFold(n_splits=5, shuffle=False, random_state=0).split(X, y))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
###Output
_____no_output_____
###Markdown
By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (`cv5_idx`) to the `cross_val_score` scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds. Next, let us use the `GridSearchCV` object and feed it the same 5 cross-validation sets (via the pre-generated `cv5_idx` indices):
###Code
if Version(sklearn_version) < '0.18':
from sklearn.grid_search import GridSearchCV
else:
from sklearn.model_selection import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
###Output
Fitting 5 folds for each of 1 candidates, totalling 5 fits
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.400000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.200000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
###Markdown
As we can see, the scores for the 5 folds are exactly the same as the ones from `cross_val_score` earlier. Now, the best_score_ attribute of the `GridSearchCV` object, which becomes available after `fit`ting, returns the average accuracy score of the best model:
###Code
gs.best_score_
###Output
_____no_output_____
###Markdown
As we can see, the result above is consistent with the average score computed by `cross_val_score`.
###Code
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
###Output
_____no_output_____
###Markdown
End comment. Working with bigger data - online algorithms and out-of-core learning
###Code
import numpy as np
import re
from nltk.corpus import stopwords
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='./movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path='./movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
###Output
_____no_output_____
###Markdown
*Python Machine Learning 2nd Edition* by [Sebastian Raschka](https://sebastianraschka.com), Packt Publishing Ltd. 2017Code Repository: https://github.com/rasbt/python-machine-learning-book-2nd-editionCode License: [MIT License](https://github.com/rasbt/python-machine-learning-book-2nd-edition/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 8 - Applying Machine Learning To Sentiment Analysis Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s). *The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.*
###Code
%load_ext watermark
%watermark -a "Sebastian Raschka" -u -d -v -p numpy,pandas,sklearn,nltk
! pip install pyprind
###Output
_____no_output_____
###Markdown
Overview - [Preparing the IMDb movie review data for text processing](Preparing-the-IMDb-movie-review-data-for-text-processing) - [Obtaining the IMDb movie review dataset](Obtaining-the-IMDb-movie-review-dataset) - [Preprocessing the movie dataset into more convenient format](Preprocessing-the-movie-dataset-into-more-convenient-format)- [Introducing the bag-of-words model](Introducing-the-bag-of-words-model) - [Transforming words into feature vectors](Transforming-words-into-feature-vectors) - [Assessing word relevancy via term frequency-inverse document frequency](Assessing-word-relevancy-via-term-frequency-inverse-document-frequency) - [Cleaning text data](Cleaning-text-data) - [Processing documents into tokens](Processing-documents-into-tokens)- [Training a logistic regression model for document classification](Training-a-logistic-regression-model-for-document-classification)- [Working with bigger data – online algorithms and out-of-core learning](Working-with-bigger-data-–-online-algorithms-and-out-of-core-learning)- [Topic modeling](Topic-modeling) - [Decomposing text documents with Latent Dirichlet Allocation](Decomposing-text-documents-with-Latent-Dirichlet-Allocation) - [Latent Dirichlet Allocation with scikit-learn](Latent-Dirichlet-Allocation-with-scikit-learn)- [Summary](Summary) Preparing the IMDb movie review data for text processing Obtaining the IMDb movie review dataset The IMDB movie review set can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/).After downloading the dataset, decompress the files.A) If you are working with Linux or MacOS X, open a new terminal windowm `cd` into the download directory and execute `tar -zxf aclImdb_v1.tar.gz`B) If you are working with Windows, download an archiver such as [7Zip](http://www.7-zip.org) to extract the files from the download archive. **Optional code to download and unzip the dataset via Python:**
###Code
import os
import sys
import tarfile
import time
source = 'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'
target = 'aclImdb_v1.tar.gz'
def reporthook(count, block_size, total_size):
global start_time
if count == 0:
start_time = time.time()
return
duration = time.time() - start_time
progress_size = int(count * block_size)
speed = progress_size / (1024.**2 * duration)
percent = count * block_size * 100. / total_size
sys.stdout.write("\r%d%% | %d MB | %.2f MB/s | %d sec elapsed" %
(percent, progress_size / (1024.**2), speed, duration))
sys.stdout.flush()
if not os.path.isdir('aclImdb') and not os.path.isfile('aclImdb_v1.tar.gz'):
if (sys.version_info < (3, 0)):
import urllib
urllib.urlretrieve(source, target, reporthook)
else:
import urllib.request
urllib.request.urlretrieve(source, target, reporthook)
if not os.path.isdir('aclImdb'):
with tarfile.open(target, 'r:gz') as tar:
tar.extractall()
###Output
_____no_output_____
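###Markdown
Before moving on, it can be worth confirming that the archive was extracted where the next cell expects it; this small check is a sketch added for convenience.
###Code
# Sketch: verify the extracted directory layout (train/test, each with pos/neg)
import os

for split in ('train', 'test'):
    for label in ('pos', 'neg'):
        path = os.path.join('aclImdb', split, label)
        print(path, len(os.listdir(path)), 'files')
###Output
_____no_output_____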
###Markdown
Preprocessing the movie dataset into more convenient format
###Code
import pyprind
import pandas as pd
import os
# change the `basepath` to the directory of the
# unzipped movie dataset
basepath = 'aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in os.listdir(path):
with open(os.path.join(path, file),
'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]],
ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
###Output
_____no_output_____
###Markdown
Shuffling the DataFrame:
###Code
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
###Output
_____no_output_____
###Markdown
Optional: Saving the assembled data as CSV file:
###Code
df.to_csv('movie_data.csv', index=False, encoding='utf-8')
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
###Output
_____no_output_____
###Markdown
Note: If you have problems with creating the `movie_data.csv` file in the previous chapter, you can find a zip archive for download at https://github.com/rasbt/python-machine-learning-book-2nd-edition/tree/master/code/ch08/ Introducing the bag-of-words model ... Transforming documents into feature vectors By calling the fit_transform method on CountVectorizer, we just constructed the vocabulary of the bag-of-words model and transformed the following three sentences into sparse feature vectors: 1. The sun is shining 2. The weather is sweet 3. The sun is shining, the weather is sweet, and one and one is two
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
###Output
_____no_output_____
###Markdown
Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts:
###Code
print(count.vocabulary_)
###Output
_____no_output_____
###Markdown
As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary that maps the unique words to integer indices. Next let us print the feature vectors that we just created: Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the CountVectorizer vocabulary. For example, the first feature at index position 0 resembles the count of the word 'and', which only occurs in the last document, and the word 'is' at index position 1 (the 2nd feature in the document vectors) occurs in all three sentences. Those values in the feature vectors are also called the raw term frequencies: *tf (t,d)*—the number of times a term t occurs in a document *d*.
###Code
print(bag.toarray())
###Output
_____no_output_____
###Markdown
Assessing word relevancy via term frequency-inverse document frequency
###Code
np.set_printoptions(precision=2)
###Output
_____no_output_____
###Markdown
When we are analyzing text data, we often encounter words that occur across multiple documents from both classes. Those frequently occurring words typically don't contain useful or discriminatory information. In this subsection, we will learn about a useful technique called term frequency-inverse document frequency (tf-idf) that can be used to downweight those frequently occurring words in the feature vectors. The tf-idf can be defined as the product of the term frequency and the inverse document frequency:$$\text{tf-idf}(t,d)=\text{tf}(t,d)\times \text{idf}(t,d)$$Here the tf(t, d) is the term frequency that we introduced in the previous section, and the inverse document frequency *idf(t, d)* can be calculated as:$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$where $n_d$ is the total number of documents, and *df(d, t)* is the number of documents *d* that contain the term *t*. Note that adding the constant 1 to the denominator is optional and serves the purpose of assigning a non-zero value to terms that occur in all training samples; the log is used to ensure that low document frequencies are not given too much weight. Scikit-learn implements yet another transformer, the `TfidfTransformer`, that takes the raw term frequencies from `CountVectorizer` as input and transforms them into tf-idfs:
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True,
norm='l2',
smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs))
.toarray())
###Output
_____no_output_____
###Markdown
As we saw in the previous subsection, the word 'is' had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word 'is' is now associated with a relatively small tf-idf (0.45) in document 3 since it is also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information. However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the `TfidfTransformer` calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier. The equations for the idf and tf-idf that were implemented in scikit-learn are: $$\text{idf} (t,d) = log\frac{1 + n_d}{1 + \text{df}(d, t)}$$The tf-idf equation that was implemented in scikit-learn is as follows:$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$While it is also more typical to normalize the raw term frequencies before calculating the tf-idfs, the `TfidfTransformer` normalizes the tf-idfs directly. By default (`norm='l2'`), scikit-learn's TfidfTransformer applies the L2-normalization, which returns a vector of length 1 by dividing an un-normalized feature vector *v* by its L2-norm:$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$To make sure that we understand how TfidfTransformer works, let us walk through an example and calculate the tf-idf of the word 'is' in the 3rd document. The word 'is' has a term frequency of 3 (tf = 3) in document 3, and the document frequency of this term is 3 since the term 'is' occurs in all three documents (df = 3). Thus, we can calculate the idf as follows:$$\text{idf}("is", d3) = log \frac{1+3}{1+3} = 0$$Now in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
###Code
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
###Output
_____no_output_____
###Markdown
If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vector: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0 , 1.69, 1.29]. However, we notice that the values in this feature vector are different from the values that we obtained from the TfidfTransformer that we used previously. The final step that we are missing in this tf-idf calculation is the L2-normalization, which can be applied as follows: $$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0 , 1.69, 1.29]}{\sqrt{[3.39^2, 3.0^2, 3.39^2, 1.29^2, 1.29^2, 1.29^2, 2.0^2 , 1.69^2, 1.29^2]}}$$$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$$$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$ As we can see, the results match the results returned by scikit-learn's `TfidfTransformer` (below). Since we now understand how tf-idfs are calculated, let us proceed to the next sections and apply those concepts to the movie review dataset.
###Code
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
###Markdown
Cleaning text data
###Code
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)',
text)
text = (re.sub('[\W]+', ' ', text.lower()) +
' '.join(emoticons).replace('-', ''))
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
###Output
_____no_output_____
###Markdown
Processing documents into tokens
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
###Output
_____no_output_____
###Markdown
Training a logistic regression model for document classification Strip HTML and punctuation to speed up the GridSearch later:
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
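# Note: the reduced param_grid below overwrites the larger grid defined above;
# it is presumably kept here to shorten the grid search for quick testing
# (an assumption, not stated in the original notebook).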
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=-1)
###Output
_____no_output_____
###Markdown
**Important Note about `n_jobs`**Please note that it is highly recommended to use `n_jobs=-1` (instead of `n_jobs=1`) in the previous code example to utilize all available cores on your machine and speed up the grid search. However, some Windows users reported issues when running the previous code with the `n_jobs=-1` setting related to pickling the tokenizer and tokenizer_porter functions for multiprocessing on Windows. Another workaround would be to replace those two functions, `[tokenizer, tokenizer_porter]`, with `[str.split]`. However, note that the replacement by the simple `str.split` would not support stemming. **Important Note about the running time**Executing the following code cell **may take up to 30-60 min** depending on your machine, since based on the parameter grid we defined, there are 2*2*2*3*5 + 2*2*2*3*5 = 240 models to fit.If you do not wish to wait so long, you could reduce the size of the dataset by decreasing the number of training samples, for example, as follows: X_train = df.loc[:2500, 'review'].values y_train = df.loc[:2500, 'sentiment'].values However, note that decreasing the training set size to such a small number will likely result in poorly performing models. Alternatively, you can delete parameters from the grid above to reduce the number of models to fit -- for example, by using the following: param_grid = [{'vect__ngram_range': [(1, 1)], 'vect__stop_words': [stop, None], 'vect__tokenizer': [tokenizer], 'clf__penalty': ['l1', 'l2'], 'clf__C': [1.0, 10.0]}, ]
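For readers who need the Windows workaround mentioned above, the following cell is a minimal added sketch (not part of the original notebook) of a grid that swaps the custom tokenizer functions for the built-in `str.split`; note that this variant gives up stemming:
###Code
# Added sketch: a grid that avoids pickling the custom tokenizer functions on Windows
# by using the built-in str.split (no stemming, no custom token handling)
param_grid_str_split = [{'vect__ngram_range': [(1, 1)],
                         'vect__stop_words': [stop, None],
                         'vect__tokenizer': [str.split],
                         'clf__penalty': ['l1', 'l2'],
                         'clf__C': [1.0, 10.0, 100.0]}]
gs_lr_tfidf_str_split = GridSearchCV(lr_tfidf, param_grid_str_split,
                                     scoring='accuracy',
                                     cv=5, verbose=1, n_jobs=-1)
###Output
_____no_output_____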
###Code
## @Readers: PLEASE IGNORE THIS CELL
##
## This cell is meant to generate more
## "logging" output when this notebook is run
## on the Travis Continuous Integration
## platform to test the code as well as
## speeding up the run using a smaller
## dataset for debugging
if 'TRAVIS' in os.environ:
gs_lr_tfidf.verbose=2
X_train = df.loc[:250, 'review'].values
y_train = df.loc[:250, 'sentiment'].values
X_test = df.loc[25000:25250, 'review'].values
y_test = df.loc[25000:25250, 'sentiment'].values
print(X_train)
print(y_train)
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
###Output
_____no_output_____
###Markdown
Start comment: Please note that `gs_lr_tfidf.best_score_` is the average k-fold cross-validation score. I.e., if we have a `GridSearchCV` object with 5-fold cross-validation (like the one above), the `best_score_` attribute returns the average score over the 5 folds of the best model. To illustrate this with an example:
###Code
from sklearn.linear_model import LogisticRegression
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
cv5_idx = list(StratifiedKFold(n_splits=5, shuffle=False, random_state=0).split(X, y))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
###Output
_____no_output_____
###Markdown
By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (`cv5_idx`) to the `cross_val_score` scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds. Next, let us use the `GridSearchCV` object and feed it the same 5 cross-validation sets (via the pre-generated `cv5_idx` indices):
###Code
from sklearn.model_selection import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
###Output
_____no_output_____
###Markdown
As we can see, the scores for the 5 folds are exactly the same as the ones from `cross_val_score` earlier. Now, the best_score_ attribute of the `GridSearchCV` object, which becomes available after `fit`ting, returns the average accuracy score of the best model:
###Code
gs.best_score_
###Output
_____no_output_____
###Markdown
As we can see, the result above is consistent with the average score computed by `cross_val_score`.
###Code
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
###Output
_____no_output_____
###Markdown
End comment. Working with bigger data - online algorithms and out-of-core learning
###Code
import numpy as np
import re
from nltk.corpus import stopwords
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
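            # each line ends with ',<label>\n', so line[-2] is the 0/1 label
            # and line[:-3] is the review text without the trailing ',<label>\n'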
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path='movie_data.csv')
###Output
_____no_output_____
###Markdown
**Note**- You can replace `SGDClassifier(n_iter, ...)` by `SGDClassifier(max_iter, ...)` in scikit-learn >= 0.19. The `n_iter` parameter is used here deliberately, because some people still use scikit-learn 0.18.
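If the training cell below should run on both older and newer scikit-learn versions, one option (an added sketch that mirrors the version check used later in this notebook) is to choose the keyword argument based on the installed version:
###Code
# Added sketch: pick n_iter or max_iter depending on the installed scikit-learn version.
# (Very recent scikit-learn releases also renamed loss='log' to loss='log_loss'.)
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
if Version(sklearn_version) < Version('0.19'):
    clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
else:
    clf = SGDClassifier(loss='log', random_state=1, max_iter=1)
###Output
_____no_output_____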
###Code
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
###Output
_____no_output_____
###Markdown
Topic modeling Decomposing text documents with Latent Dirichlet Allocation Latent Dirichlet Allocation with scikit-learn
###Code
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
## @Readers: PLEASE IGNORE THIS CELL
##
## This cell is meant to create a smaller dataset if
## the notebook is run on the Travis Continuous Integration
## platform to test the code on a smaller dataset
## to prevent timeout errors and just serves a debugging tool
## for this notebook
if 'TRAVIS' in os.environ:
df.loc[:500].to_csv('movie_data.csv')
df = pd.read_csv('movie_data.csv', nrows=500)
print('SMALL DATA SUBSET CREATED FOR TESTING')
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer(stop_words='english',
max_df=.1,
max_features=5000)
X = count.fit_transform(df['review'].values)
from sklearn.decomposition import LatentDirichletAllocation
lda = LatentDirichletAllocation(n_topics=10,
random_state=123,
learning_method='batch')
X_topics = lda.fit_transform(X)
lda.components_.shape
n_top_words = 5
feature_names = count.get_feature_names()
for topic_idx, topic in enumerate(lda.components_):
print("Topic %d:" % (topic_idx + 1))
print(" ".join([feature_names[i]
for i in topic.argsort()\
[:-n_top_words - 1:-1]]))
###Output
_____no_output_____
###Markdown
Based on reading the 5 most important words for each topic, we may guess that the LDA identified the following topics: 1. Generally bad movies (not really a topic category) 2. Movies about families 3. War movies 4. Art movies 5. Crime movies 6. Horror movies 7. Comedies 8. Movies somehow related to TV shows 9. Movies based on books 10. Action movies To confirm that the categories make sense based on the reviews, let's print excerpts from the top three movies in the horror movie category (category 6 at index position 5):
###Code
horror = X_topics[:, 5].argsort()[::-1]
for iter_idx, movie_idx in enumerate(horror[:3]):
print('\nHorror movie #%d:' % (iter_idx + 1))
print(df['review'][movie_idx][:300], '...')
###Output
_____no_output_____
###Markdown
Using the preceding code example, we printed the first 300 characters from the top 3 horror movies, and we can see that the reviews -- even though we don't know which exact movie they belong to -- indeed sound like reviews of horror movies. (However, one might argue that movie 2 could also belong to topic category 1.) Summary ... ---Readers may ignore the next cell.
###Code
! python ../.convert_notebook_to_script.py --input ch08.ipynb --output ch08.py
###Output
_____no_output_____
###Markdown
*Python Machine Learning 2nd Edition* by [Sebastian Raschka](https://sebastianraschka.com), Packt Publishing Ltd. 2017Code Repository: https://github.com/rasbt/python-machine-learning-book-2nd-editionCode License: [MIT License](https://github.com/rasbt/python-machine-learning-book-2nd-edition/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 8 - Applying Machine Learning To Sentiment Analysis Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a "Sebastian Raschka" -u -d -v -p numpy,pandas,sklearn,nltk
###Output
Sebastian Raschka
last updated: 2017-04-13
CPython 3.6.0
IPython 5.3.0
numpy 1.12.1
pandas 0.19.2
sklearn 0.18.1
nltk 3.2.2
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview - [Preparing the IMDb movie review data for text processing](Preparing-the-IMDb-movie-review-data-for-text-processing) - [Obtaining the IMDb movie review dataset](Obtaining-the-IMDb-movie-review-dataset) - [Preprocessing the movie dataset into more convenient format](Preprocessing-the-movie-dataset-into-more-convenient-format)- [Introducing the bag-of-words model](Introducing-the-bag-of-words-model) - [Transforming words into feature vectors](Transforming-words-into-feature-vectors) - [Assessing word relevancy via term frequency-inverse document frequency](Assessing-word-relevancy-via-term-frequency-inverse-document-frequency) - [Cleaning text data](Cleaning-text-data) - [Processing documents into tokens](Processing-documents-into-tokens)- [Training a logistic regression model for document classification](Training-a-logistic-regression-model-for-document-classification)- [Working with bigger data – online algorithms and out-of-core learning](Working-with-bigger-data-–-online-algorithms-and-out-of-core-learning)- [Topic modeling](Topic-modeling) - [Decomposing text documents with Latent Dirichlet Allocation](Decomposing-text-documents-with-Latent-Dirichlet-Allocation) - [Latent Dirichlet Allocation with scikit-learn](Latent-Dirichlet-Allocation-with-scikit-learn)- [Summary](Summary) Preparing the IMDb movie review data for text processing Obtaining the IMDb movie review dataset The IMDb movie review dataset can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/). After downloading the dataset, decompress the files. A) If you are working with Linux or macOS, open a new terminal window, `cd` into the download directory and execute `tar -zxf aclImdb_v1.tar.gz` B) If you are working with Windows, download an archiver such as [7Zip](http://www.7-zip.org) to extract the files from the download archive. **Optional code to download and unzip the dataset via Python:**
###Code
import os
import sys
import tarfile
source = 'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'
target = 'aclImdb_v1.tar.gz'
if not os.path.isdir('aclImdb'):
if (sys.version_info < (3, 0)):
import urllib
urllib.urlretrieve(source, target)
else:
import urllib.request
urllib.request.urlretrieve(source, target)
with tarfile.open(target, 'r:gz') as tar:
tar.extractall()
###Output
_____no_output_____
###Markdown
Preprocessing the movie dataset into more convenient format
###Code
import pyprind
import pandas as pd
import os
# change the `basepath` to the directory of the
# unzipped movie dataset
basepath = 'aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in os.listdir(path):
with open(os.path.join(path, file),
'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]],
ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
###Output
0% 100%
[##############################] | ETA: 00:00:00
Total time elapsed: 00:01:28
###Markdown
Shuffling the DataFrame:
###Code
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
###Output
_____no_output_____
###Markdown
Optional: Saving the assembled data as CSV file:
###Code
df.to_csv('movie_data.csv', index=False)
import pandas as pd
df = pd.read_csv('movie_data.csv')
df.head(3)
###Output
_____no_output_____
###Markdown
Note: If you have problems with creating the `movie_data.csv` file in the previous section, you can download a zip archive at https://github.com/rasbt/python-machine-learning-book-2nd-edition/tree/master/code/ch08/
###Code
## @Readers: PLEASE IGNORE THIS CELL
##
## This cell is meant to load a smaller dataset if
## the notebook is run on the Travis Continuous Integration
## platform to test the code on a smaller dataset
## to prevent timeout errors and just serves a debugging tool
if 'TRAVIS' in os.environ:
df = pd.read_csv('movie_data.csv', nrows=500)
###Output
_____no_output_____
###Markdown
Introducing the bag-of-words model ... Transforming documents into feature vectors By calling the fit_transform method on CountVectorizer, we just constructed the vocabulary of the bag-of-words model and transformed the following three sentences into sparse feature vectors:1. The sun is shining2. The weather is sweet3. The sun is shining, the weather is sweet, and one and one is two
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
###Output
_____no_output_____
###Markdown
Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts:
###Code
print(count.vocabulary_)
###Output
{'the': 6, 'sun': 4, 'is': 1, 'shining': 3, 'weather': 8, 'sweet': 5, 'and': 0, 'one': 2, 'two': 7}
###Markdown
As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary, which maps the unique words to integer indices. Next, let us print the feature vectors that we just created: Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the `CountVectorizer` vocabulary. For example, the first feature at index position 0 represents the count of the word "and", which only occurs in the last document, and the word "is" at index position 1 (the 2nd feature in the document vectors) occurs in all three sentences. Those values in the feature vectors are also called the raw term frequencies: *tf(t, d)*—the number of times a term *t* occurs in a document *d*.
###Code
print(bag.toarray())
###Output
[[0 1 0 1 1 0 1 0 0]
[0 1 0 0 0 1 1 0 1]
[2 3 2 1 1 1 2 1 1]]
###Markdown
Assessing word relevancy via term frequency-inverse document frequency
###Code
np.set_printoptions(precision=2)
###Output
_____no_output_____
###Markdown
When we are analyzing text data, we often encounter words that occur across multiple documents from both classes. Those frequently occurring words typically don't contain useful or discriminatory information. In this subsection, we will learn about a useful technique called term frequency-inverse document frequency (tf-idf) that can be used to downweight those frequently occurring words in the feature vectors. The tf-idf can be defined as the product of the term frequency and the inverse document frequency:$$\text{tf-idf}(t,d)=\text{tf}(t,d)\times \text{idf}(t,d)$$Here the tf(t, d) is the term frequency that we introduced in the previous section, and the inverse document frequency *idf(t, d)* can be calculated as:$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$where $n_d$ is the total number of documents, and *df(d, t)* is the number of documents *d* that contain the term *t*. Note that adding the constant 1 to the denominator is optional and serves the purpose of assigning a non-zero value to terms that occur in all training samples; the log is used to ensure that low document frequencies are not given too much weight. Scikit-learn implements yet another transformer, the `TfidfTransformer`, that takes the raw term frequencies from `CountVectorizer` as input and transforms them into tf-idfs:
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True,
norm='l2',
smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs))
.toarray())
###Output
[[ 0. 0.43 0. 0.56 0.56 0. 0.43 0. 0. ]
[ 0. 0.43 0. 0. 0. 0.56 0.43 0. 0.56]
[ 0.5 0.45 0.5 0.19 0.19 0.19 0.3 0.25 0.19]]
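###Markdown
Before dissecting these numbers, here is a quick arithmetic check of the plain textbook idf equation defined above, using the same three example documents (this cell is an added illustration; the variable names are ours):
###Code
import numpy as np
n_docs = 3
# textbook idf: log(n_d / (1 + df(d, t)))
idf_is = np.log(n_docs / (1 + 3))   # "is" occurs in all three documents
idf_sun = np.log(n_docs / (1 + 2))  # "sun" occurs in two documents
# the negative value for "is" shows why the smoothed scikit-learn variant
# discussed next is often preferred
print('textbook idf("is")  = %.2f' % idf_is)
print('textbook idf("sun") = %.2f' % idf_sun)
###Output
_____no_output_____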
###Markdown
As we saw in the previous subsection, the word "is" had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word "is" is now associated with a relatively small tf-idf (0.45) in document 3, since it is also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information. However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the `TfidfTransformer` calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier. The idf equation implemented in scikit-learn is: $$\text{idf} (t,d) = log\frac{1 + n_d}{1 + \text{df}(d, t)}$$The tf-idf equation implemented in scikit-learn is as follows:$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$While it is also more typical to normalize the raw term frequencies before calculating the tf-idfs, the `TfidfTransformer` normalizes the tf-idfs directly. By default (`norm='l2'`), scikit-learn's `TfidfTransformer` applies the L2-normalization, which returns a vector of length 1 by dividing an un-normalized feature vector *v* by its L2-norm:$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$To make sure that we understand how `TfidfTransformer` works, let us walk through an example and calculate the tf-idf of the word "is" in the 3rd document. The word "is" has a term frequency of 3 (tf = 3) in document 3, and the document frequency of this term is 3 since the term "is" occurs in all three documents (df = 3). Thus, we can calculate the idf as follows:$$\text{idf}("is", d3) = log \frac{1+3}{1+3} = 0$$Now, in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
###Code
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
###Output
tf-idf of term "is" = 3.00
###Markdown
If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vector: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]. However, we notice that the values in this feature vector are different from the values that we obtained from the `TfidfTransformer` that we used previously. The final step that we are missing in this tf-idf calculation is the L2-normalization, which can be applied as follows: $$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]}{\sqrt{3.39^2 + 3.0^2 + 3.39^2 + 1.29^2 + 1.29^2 + 1.29^2 + 2.0^2 + 1.69^2 + 1.29^2}}$$$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$$$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$ As we can see, the results match the results returned by scikit-learn's `TfidfTransformer` (below). Since we now understand how tf-idfs are calculated, let us proceed to the next sections and apply those concepts to the movie review dataset.
###Code
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
###Markdown
Cleaning text data
###Code
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)',
text)
text = (re.sub('[\W]+', ' ', text.lower()) +
' '.join(emoticons).replace('-', ''))
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
###Output
_____no_output_____
###Markdown
Processing documents into tokens
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
###Output
_____no_output_____
###Markdown
Training a logistic regression model for document classification Strip HTML and punctuation to speed up the GridSearch later:
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=-1)
###Output
_____no_output_____
###Markdown
**Important Note**Please note that it is highly recommended to use `n_jobs=-1` (instead of `n_jobs=1`) in the previous code example to utilize all available cores on your machine and speed up the grid search. However, some Windows users reported issues when running the previous code with the `n_jobs=-1` setting related to pickling the tokenizer and tokenizer_porter functions for multiprocessing on Windows. Another workaround would be to replace those two functions, `[tokenizer, tokenizer_porter]`, with `[str.split]`. However, note that the replacement by the simple str.split would not support stemming.
###Code
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
###Output
Test Accuracy: 0.899
###Markdown
Start comment: Please note that `gs_lr_tfidf.best_score_` is the average k-fold cross-validation score. I.e., if we have a `GridSearchCV` object with 5-fold cross-validation (like the one above), the `best_score_` attribute returns the average score over the 5 folds of the best model. To illustrate this with an example:
###Code
from sklearn.linear_model import LogisticRegression
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
cv5_idx = list(StratifiedKFold(n_splits=5, shuffle=False, random_state=0).split(X, y))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
###Output
_____no_output_____
###Markdown
By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (`cv5_idx`) to the `cross_val_score` scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds. Next, let us use the `GridSearchCV` object and feed it the same 5 cross-validation sets (via the pre-generated `cv5_idx` indices):
###Code
from sklearn.model_selection import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
###Output
Fitting 5 folds for each of 1 candidates, totalling 5 fits
[CV] ................................................................
[CV] ................................. , score=0.600000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.400000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.600000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.200000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.600000, total= 0.0s
###Markdown
As we can see, the scores for the 5 folds are exactly the same as the ones from `cross_val_score` earlier. Now, the best_score_ attribute of the `GridSearchCV` object, which becomes available after `fit`ting, returns the average accuracy score of the best model:
###Code
gs.best_score_
###Output
_____no_output_____
###Markdown
As we can see, the result above is consistent with the average score computed by `cross_val_score`.
###Code
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
###Output
_____no_output_____
###Markdown
End comment. Working with bigger data - online algorithms and out-of-core learning
###Code
import numpy as np
import re
from nltk.corpus import stopwords
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='./movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path='./movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
###Output
_____no_output_____
###Markdown
Topic modeling Decomposing text documents with Latent Dirichlet Allocation Latent Dirichlet Allocation with scikit-learn
###Code
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer(stop_words='english',
max_df=.1,
max_features=5000)
X = count.fit_transform(df['review'].values)
from sklearn.decomposition import LatentDirichletAllocation
lda = LatentDirichletAllocation(n_topics=10,
random_state=123,
learning_method='batch')
X_topics = lda.fit_transform(X)
lda.components_.shape
n_top_words = 5
feature_names = count.get_feature_names()
for topic_idx, topic in enumerate(lda.components_):
print("Topic %d:" % (topic_idx + 1))
print(" ".join([feature_names[i]
for i in topic.argsort()\
[:-n_top_words - 1:-1]]))
###Output
Topic 1:
worst minutes awful script stupid
Topic 2:
family mother father children girl
Topic 3:
american war dvd music tv
Topic 4:
human audience cinema art sense
Topic 5:
police guy car dead murder
Topic 6:
horror house sex girl woman
Topic 7:
role performance comedy actor performances
Topic 8:
series episode war episodes tv
Topic 9:
book version original read novel
Topic 10:
action fight guy guys cool
###Markdown
Based on reading the 5 most important words for each topic, we may guess that the LDA identified the following topics: 1. Generally bad movies (not really a topic category) 2. Movies about families 3. War movies 4. Art movies 5. Crime movies 6. Horror movies 7. Comedies 8. Movies somehow related to TV shows 9. Movies based on books 10. Action movies To confirm that the categories make sense based on the reviews, let's print excerpts from the top three movies in the horror movie category (category 6 at index position 5):
###Code
horror = X_topics[:, 5].argsort()[::-1]
for iter_idx, movie_idx in enumerate(horror[:3]):
print('\nHorror movie #%d:' % (iter_idx + 1))
print(df['review'][movie_idx][:300], '...')
###Output
Horror movie #1:
House of Dracula works from the same basic premise as House of Frankenstein from the year before; namely that Universal's three most famous monsters; Dracula, Frankenstein's Monster and The Wolf Man are appearing in the movie together. Naturally, the film is rather messy therefore, but the fact that ...
Horror movie #2:
Okay, what the hell kind of TRASH have I been watching now? "The Witches' Mountain" has got to be one of the most incoherent and insane Spanish exploitation flicks ever and yet, at the same time, it's also strangely compelling. There's absolutely nothing that makes sense here and I even doubt there ...
Horror movie #3:
<br /><br />Horror movie time, Japanese style. Uzumaki/Spiral was a total freakfest from start to finish. A fun freakfest at that, but at times it was a tad too reliant on kitsch rather than the horror. The story is difficult to summarize succinctly: a carefree, normal teenage girl starts coming fac ...
###Markdown
*Python Machine Learning 2nd Edition* by [Sebastian Raschka](https://sebastianraschka.com), Packt Publishing Ltd. 2017Code Repository: https://github.com/rasbt/python-machine-learning-book-2nd-editionCode License: [MIT License](https://github.com/rasbt/python-machine-learning-book-2nd-edition/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 8 - Applying Machine Learning To Sentiment Analysis Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a "Sebastian Raschka" -u -d -v -p numpy,pandas,sklearn,nltk
###Output
Sebastian Raschka
last updated: 2017-09-02
CPython 3.6.1
IPython 6.1.0
numpy 1.12.1
pandas 0.20.3
sklearn 0.19.0
nltk 3.2.4
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview - [Preparing the IMDb movie review data for text processing](Preparing-the-IMDb-movie-review-data-for-text-processing) - [Obtaining the IMDb movie review dataset](Obtaining-the-IMDb-movie-review-dataset) - [Preprocessing the movie dataset into more convenient format](Preprocessing-the-movie-dataset-into-more-convenient-format)- [Introducing the bag-of-words model](Introducing-the-bag-of-words-model) - [Transforming words into feature vectors](Transforming-words-into-feature-vectors) - [Assessing word relevancy via term frequency-inverse document frequency](Assessing-word-relevancy-via-term-frequency-inverse-document-frequency) - [Cleaning text data](Cleaning-text-data) - [Processing documents into tokens](Processing-documents-into-tokens)- [Training a logistic regression model for document classification](Training-a-logistic-regression-model-for-document-classification)- [Working with bigger data – online algorithms and out-of-core learning](Working-with-bigger-data-–-online-algorithms-and-out-of-core-learning)- [Topic modeling](Topic-modeling) - [Decomposing text documents with Latent Dirichlet Allocation](Decomposing-text-documents-with-Latent-Dirichlet-Allocation) - [Latent Dirichlet Allocation with scikit-learn](Latent-Dirichlet-Allocation-with-scikit-learn)- [Summary](Summary) Preparing the IMDb movie review data for text processing Obtaining the IMDb movie review dataset The IMDb movie review dataset can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/). After downloading the dataset, decompress the files. A) If you are working with Linux or macOS, open a new terminal window, `cd` into the download directory and execute `tar -zxf aclImdb_v1.tar.gz` B) If you are working with Windows, download an archiver such as [7Zip](http://www.7-zip.org) to extract the files from the download archive. **Optional code to download and unzip the dataset via Python:**
###Code
import os
import sys
import tarfile
import time
source = 'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'
target = 'aclImdb_v1.tar.gz'
def reporthook(count, block_size, total_size):
global start_time
if count == 0:
start_time = time.time()
return
duration = time.time() - start_time
progress_size = int(count * block_size)
speed = progress_size / (1024.**2 * duration)
percent = count * block_size * 100. / total_size
sys.stdout.write("\r%d%% | %d MB | %.2f MB/s | %d sec elapsed" %
(percent, progress_size / (1024.**2), speed, duration))
sys.stdout.flush()
if not os.path.isdir('aclImdb') and not os.path.isfile('aclImdb_v1.tar.gz'):
if (sys.version_info < (3, 0)):
import urllib
urllib.urlretrieve(source, target, reporthook)
else:
import urllib.request
urllib.request.urlretrieve(source, target, reporthook)
if not os.path.isdir('aclImdb'):
with tarfile.open(target, 'r:gz') as tar:
tar.extractall()
###Output
_____no_output_____
###Markdown
Preprocessing the movie dataset into more convenient format
###Code
import pyprind
import pandas as pd
import os
# change the `basepath` to the directory of the
# unzipped movie dataset
basepath = 'aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in os.listdir(path):
with open(os.path.join(path, file),
'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]],
ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
###Output
0% [##############################] 100% | ETA: 00:00:00
Total time elapsed: 00:02:21
###Markdown
Shuffling the DataFrame:
###Code
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
###Output
_____no_output_____
###Markdown
Optional: Saving the assembled data as CSV file:
###Code
df.to_csv('movie_data.csv', index=False, encoding='utf-8')
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
###Output
_____no_output_____
###Markdown
Note: If you have problems with creating the `movie_data.csv` file in the previous section, you can download a zip archive at https://github.com/rasbt/python-machine-learning-book-2nd-edition/tree/master/code/ch08/ Introducing the bag-of-words model ... Transforming documents into feature vectors By calling the fit_transform method on CountVectorizer, we just constructed the vocabulary of the bag-of-words model and transformed the following three sentences into sparse feature vectors:1. The sun is shining2. The weather is sweet3. The sun is shining, the weather is sweet, and one and one is two
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
###Output
_____no_output_____
###Markdown
Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts:
###Code
print(count.vocabulary_)
###Output
{'the': 6, 'sun': 4, 'is': 1, 'shining': 3, 'weather': 8, 'sweet': 5, 'and': 0, 'one': 2, 'two': 7}
###Markdown
As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary, which maps the unique words to integer indices. Next, let us print the feature vectors that we just created: Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the `CountVectorizer` vocabulary. For example, the first feature at index position 0 represents the count of the word "and", which only occurs in the last document, and the word "is" at index position 1 (the 2nd feature in the document vectors) occurs in all three sentences. Those values in the feature vectors are also called the raw term frequencies: *tf(t, d)*—the number of times a term *t* occurs in a document *d*.
###Code
print(bag.toarray())
###Output
[[0 1 0 1 1 0 1 0 0]
[0 1 0 0 0 1 1 0 1]
[2 3 2 1 1 1 2 1 1]]
###Markdown
Assessing word relevancy via term frequency-inverse document frequency
###Code
np.set_printoptions(precision=2)
###Output
_____no_output_____
###Markdown
When we are analyzing text data, we often encounter words that occur across multiple documents from both classes. Those frequently occurring words typically don't contain useful or discriminatory information. In this subsection, we will learn about a useful technique called term frequency-inverse document frequency (tf-idf) that can be used to downweight those frequently occurring words in the feature vectors. The tf-idf can be defined as the product of the term frequency and the inverse document frequency:$$\text{tf-idf}(t,d)=\text{tf}(t,d)\times \text{idf}(t,d)$$Here the tf(t, d) is the term frequency that we introduced in the previous section, and the inverse document frequency *idf(t, d)* can be calculated as:$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$where $n_d$ is the total number of documents, and *df(d, t)* is the number of documents *d* that contain the term *t*. Note that adding the constant 1 to the denominator is optional and serves the purpose of assigning a non-zero value to terms that occur in all training samples; the log is used to ensure that low document frequencies are not given too much weight. Scikit-learn implements yet another transformer, the `TfidfTransformer`, that takes the raw term frequencies from `CountVectorizer` as input and transforms them into tf-idfs:
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True,
norm='l2',
smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs))
.toarray())
###Output
[[ 0. 0.43 0. 0.56 0.56 0. 0.43 0. 0. ]
[ 0. 0.43 0. 0. 0. 0.56 0.43 0. 0.56]
[ 0.5 0.45 0.5 0.19 0.19 0.19 0.3 0.25 0.19]]
###Markdown
As we saw in the previous subsection, the word "is" had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word "is" is now associated with a relatively small tf-idf (0.45) in document 3, since it is also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information. However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the `TfidfTransformer` calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier. The idf equation implemented in scikit-learn is: $$\text{idf} (t,d) = log\frac{1 + n_d}{1 + \text{df}(d, t)}$$The tf-idf equation implemented in scikit-learn is as follows:$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$While it is also more typical to normalize the raw term frequencies before calculating the tf-idfs, the `TfidfTransformer` normalizes the tf-idfs directly. By default (`norm='l2'`), scikit-learn's `TfidfTransformer` applies the L2-normalization, which returns a vector of length 1 by dividing an un-normalized feature vector *v* by its L2-norm:$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$To make sure that we understand how `TfidfTransformer` works, let us walk through an example and calculate the tf-idf of the word "is" in the 3rd document. The word "is" has a term frequency of 3 (tf = 3) in document 3, and the document frequency of this term is 3 since the term "is" occurs in all three documents (df = 3). Thus, we can calculate the idf as follows:$$\text{idf}("is", d3) = log \frac{1+3}{1+3} = 0$$Now, in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
###Code
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
###Output
tf-idf of term "is" = 3.00
###Markdown
If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vector: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]. However, we notice that the values in this feature vector are different from the values that we obtained from the `TfidfTransformer` that we used previously. The final step that we are missing in this tf-idf calculation is the L2-normalization, which can be applied as follows: $$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]}{\sqrt{3.39^2 + 3.0^2 + 3.39^2 + 1.29^2 + 1.29^2 + 1.29^2 + 2.0^2 + 1.69^2 + 1.29^2}}$$$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$$$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$ As we can see, the results match the results returned by scikit-learn's `TfidfTransformer` (below). Since we now understand how tf-idfs are calculated, let us proceed to the next sections and apply those concepts to the movie review dataset.
###Code
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
###Markdown
Cleaning text data
###Code
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)',
text)
text = (re.sub('[\W]+', ' ', text.lower()) +
' '.join(emoticons).replace('-', ''))
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
###Output
_____no_output_____
###Markdown
Processing documents into tokens
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
###Output
_____no_output_____
###Markdown
Training a logistic regression model for document classification Strip HTML and punctuation to speed up the GridSearch later:
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=-1)
###Output
_____no_output_____
###Markdown
**Important Note about `n_jobs`**Please note that it is highly recommended to use `n_jobs=-1` (instead of `n_jobs=1`) in the previous code example to utilize all available cores on your machine and speed up the grid search. However, some Windows users reported issues when running the previous code with the `n_jobs=-1` setting related to pickling the tokenizer and tokenizer_porter functions for multiprocessing on Windows. Another workaround would be to replace those two functions, `[tokenizer, tokenizer_porter]`, with `[str.split]`. However, note that the replacement by the simple `str.split` would not support stemming. **Important Note about the running time**Executing the following code cell **may take up to 30-60 min** depending on your machine, since based on the parameter grid we defined, there are 2*2*2*3*5 + 2*2*2*3*5 = 240 models to fit.If you do not wish to wait so long, you could reduce the size of the dataset by decreasing the number of training samples, for example, as follows: X_train = df.loc[:2500, 'review'].values y_train = df.loc[:2500, 'sentiment'].values However, note that decreasing the training set size to such a small number will likely result in poorly performing models. Alternatively, you can delete parameters from the grid above to reduce the number of models to fit -- for example, by using the following: param_grid = [{'vect__ngram_range': [(1, 1)], 'vect__stop_words': [stop, None], 'vect__tokenizer': [tokenizer], 'clf__penalty': ['l1', 'l2'], 'clf__C': [1.0, 10.0]}, ]
###Code
## @Readers: PLEASE IGNORE THIS CELL
##
## This cell is meant to generate more
## "logging" output when this notebook is run
## on the Travis Continuous Integration
## platform to test the code as well as
## speeding up the run using a smaller
## dataset for debugging
if 'TRAVIS' in os.environ:
gs_lr_tfidf.verbose=2
X_train = df.loc[:250, 'review'].values
y_train = df.loc[:250, 'sentiment'].values
X_test = df.loc[25000:25250, 'review'].values
y_test = df.loc[25000:25250, 'sentiment'].values
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
###Output
Test Accuracy: 0.899
###Markdown
Start comment: Please note that `gs_lr_tfidf.best_score_` is the average k-fold cross-validation score. I.e., if we have a `GridSearchCV` object with 5-fold cross-validation (like the one above), the `best_score_` attribute returns the average score over the 5 folds of the best model. To illustrate this with an example:
###Code
from sklearn.linear_model import LogisticRegression
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
cv5_idx = list(StratifiedKFold(n_splits=5, shuffle=False, random_state=0).split(X, y))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
###Output
_____no_output_____
###Markdown
By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (`cv5_idx`) to the `cross_val_score` scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds. Next, let us use the `GridSearchCV` object and feed it the same 5 cross-validation sets (via the pre-generated `cv5_idx` indices):
###Code
from sklearn.model_selection import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
###Output
Fitting 5 folds for each of 1 candidates, totalling 5 fits
[CV] ................................................................
[CV] ................................. , score=0.600000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.400000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.600000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.200000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.600000, total= 0.0s
###Markdown
As we can see, the scores for the 5 folds are exactly the same as the ones from `cross_val_score` earlier. Now, the best_score_ attribute of the `GridSearchCV` object, which becomes available after `fit`ting, returns the average accuracy score of the best model:
###Code
gs.best_score_
###Output
_____no_output_____
###Markdown
As we can see, the result above is consistent with the average score computed by `cross_val_score`.
###Code
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
###Output
_____no_output_____
###Markdown
End comment. Working with bigger data - online algorithms and out-of-core learning
###Code
# This cell is not contained in the book but
# added for convenience so that the notebook
# can be executed starting here, without
# executing prior code in this notebook
import os
import gzip
if not os.path.isfile('movie_data.csv'):
if not os.path.isfile('movie_data.csv.gz'):
        print('Please place a copy of the movie_data.csv.gz '
              'in this directory. You can obtain it by '
              'a) executing the code in the beginning of this '
              'notebook or b) by downloading it from GitHub: '
'https://github.com/rasbt/python-machine-learning-'
'book-2nd-edition/blob/master/code/ch08/movie_data.csv.gz')
else:
        with gzip.open('movie_data.csv.gz', 'rb') as in_f, \
                open('movie_data.csv', 'wb') as out_f:
out_f.write(in_f.read())
import numpy as np
import re
from nltk.corpus import stopwords
# The `stop` is defined as earlier in this chapter
# Added it here for convenience, so that this section
# can be run as standalone without executing prior code
# in this notebook
stop = stopwords.words('english')
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
###Output
_____no_output_____
###Markdown
**Note**- You can replace `SGDClassifier(n_iter, ...)` by `SGDClassifier(max_iter, ...)` in scikit-learn >= 0.19.
###Code
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
if Version(sklearn_version) < '0.18':
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
else:
clf = SGDClassifier(loss='log', random_state=1, max_iter=1)
doc_stream = stream_docs(path='movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
###Output
_____no_output_____
###Markdown
Topic modeling Decomposing text documents with Latent Dirichlet Allocation Latent Dirichlet Allocation with scikit-learn
###Code
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
## @Readers: PLEASE IGNORE THIS CELL
##
## This cell is meant to create a smaller dataset if
## the notebook is run on the Travis Continuous Integration
## platform to test the code on a smaller dataset
## to prevent timeout errors and just serves a debugging tool
## for this notebook
if 'TRAVIS' in os.environ:
df.loc[:500].to_csv('movie_data.csv')
df = pd.read_csv('movie_data.csv', nrows=500)
print('SMALL DATA SUBSET CREATED FOR TESTING')
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer(stop_words='english',
max_df=.1,
max_features=5000)
X = count.fit_transform(df['review'].values)
from sklearn.decomposition import LatentDirichletAllocation
lda = LatentDirichletAllocation(n_topics=10,
random_state=123,
learning_method='batch')
X_topics = lda.fit_transform(X)
lda.components_.shape
n_top_words = 5
feature_names = count.get_feature_names()
for topic_idx, topic in enumerate(lda.components_):
print("Topic %d:" % (topic_idx + 1))
print(" ".join([feature_names[i]
for i in topic.argsort()\
[:-n_top_words - 1:-1]]))
###Output
Topic 1:
worst minutes awful script stupid
Topic 2:
family mother father children girl
Topic 3:
american war dvd music tv
Topic 4:
human audience cinema art sense
Topic 5:
police guy car dead murder
Topic 6:
horror house sex girl woman
Topic 7:
role performance comedy actor performances
Topic 8:
series episode war episodes tv
Topic 9:
book version original read novel
Topic 10:
action fight guy guys cool
###Markdown
Based on reading the 5 most important words for each topic, we may guess that the LDA identified the following topics: 1. Generally bad movies (not really a topic category)2. Movies about families3. War movies4. Art movies5. Crime movies6. Horror movies7. Comedies8. Movies somehow related to TV shows9. Movies based on books10. Action movies To confirm that the categories make sense based on the reviews, let's print the first 300 characters from the top three movies in the horror movie category (category 6 at index position 5):
###Code
horror = X_topics[:, 5].argsort()[::-1]
for iter_idx, movie_idx in enumerate(horror[:3]):
print('\nHorror movie #%d:' % (iter_idx + 1))
print(df['review'][movie_idx][:300], '...')
###Output
Horror movie #1:
House of Dracula works from the same basic premise as House of Frankenstein from the year before; namely that Universal's three most famous monsters; Dracula, Frankenstein's Monster and The Wolf Man are appearing in the movie together. Naturally, the film is rather messy therefore, but the fact that ...
Horror movie #2:
Okay, what the hell kind of TRASH have I been watching now? "The Witches' Mountain" has got to be one of the most incoherent and insane Spanish exploitation flicks ever and yet, at the same time, it's also strangely compelling. There's absolutely nothing that makes sense here and I even doubt there ...
Horror movie #3:
<br /><br />Horror movie time, Japanese style. Uzumaki/Spiral was a total freakfest from start to finish. A fun freakfest at that, but at times it was a tad too reliant on kitsch rather than the horror. The story is difficult to summarize succinctly: a carefree, normal teenage girl starts coming fac ...
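###Markdown
As an additional illustration that is not part of the original notebook, the short sketch below assigns each review to its most probable topic by taking the `argmax` over the document-topic matrix `X_topics` and then counts how many reviews fall into each of the 10 topics.
###Code
# A minimal sketch (not in the book): count how many reviews are dominated by each topic.
dominant_topic = X_topics.argmax(axis=1)
topics, counts = np.unique(dominant_topic, return_counts=True)
for t, c in zip(topics, counts):
    print('Topic %d: %d reviews' % (t + 1, c))
###Output
_____no_output_____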
###Markdown
Using the preceding code example, we printed the first 300 characters from the top 3 horror movies, and we can see that the reviews -- even though we don't know which exact movie they belong to -- do indeed sound like reviews of horror movies. (However, one might argue that movie 2 could also belong to topic category 1.) Summary ... ---Readers may ignore the next cell.
###Code
! python ../.convert_notebook_to_script.py --input ch08.ipynb --output ch08.py
###Output
[NbConvertApp] Converting notebook ch08.ipynb to script
[NbConvertApp] Writing 11500 bytes to ch08.txt
###Markdown
8장. 감성 분석에 머신 러닝 적용하기 **아래 링크를 통해 이 노트북을 주피터 노트북 뷰어(nbviewer.jupyter.org)로 보거나 구글 코랩(colab.research.google.com)에서 실행할 수 있습니다.** 주피터 노트북 뷰어로 보기 구글 코랩(Colab)에서 실행하기 `watermark`는 주피터 노트북에 사용하는 파이썬 패키지를 출력하기 위한 유틸리티입니다. `watermark` 패키지를 설치하려면 다음 셀의 주석을 제거한 뒤 실행하세요.
###Code
#!pip install watermark
%load_ext watermark
%watermark -u -d -v -p numpy,pandas,sklearn,nltk
###Output
last updated: 2020-05-22
CPython 3.7.3
IPython 7.5.0
numpy 1.18.4
pandas 1.0.3
sklearn 0.23.1
nltk 3.4.1
###Markdown
텍스트 처리용 IMDb 영화 리뷰 데이터 준비 영화 리뷰 데이터셋 구하기 IMDB 영화 리뷰 데이터셋은 [http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz](http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz)에서 내려받을 수 있습니다. 다운로드된 후 파일 압축을 해제합니다.A) 리눅스(Linux)나 macOS를 사용하면 새로운 터미널(Terminal) 윈도우를 열고 `cd` 명령으로 다운로드 디렉터리로 이동하여 다음 명령을 실행하세요. `tar -zxf aclImdb_v1.tar.gz`B) 윈도(Windows)를 사용하면 7Zip(http://www.7-zip.org) 같은 무료 압축 유틸리티를 설치하여 다운로드한 파일의 압축을 풀 수 있습니다. **코랩이나 리눅스에서 직접 다운로드하려면 다음 셀의 주석을 제거하고 실행하세요.**
###Code
#!wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
###Output
_____no_output_____
###Markdown
**다음처럼 파이썬에서 직접 압축을 풀 수도 있습니다:**
###Code
import os
import tarfile
if not os.path.isdir('aclImdb'):
with tarfile.open('aclImdb_v1.tar.gz', 'r:gz') as tar:
tar.extractall()
###Output
_____no_output_____
###Markdown
영화 리뷰 데이터셋을 더 간편한 형태로 전처리하기 `pyprind`는 주피터 노트북에서 진행바를 출력하기 위한 유틸리티입니다. `pyprind` 패키지를 설치하려면 다음 셀의 주석을 제거한 뒤 실행하세요.
###Code
#!pip install pyprind
import pyprind
import pandas as pd
import os
# `basepath`를 압축 해제된 영화 리뷰 데이터셋이 있는
# 디렉토리로 바꾸세요
basepath = 'aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in sorted(os.listdir(path)):
with open(os.path.join(path, file),
'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]],
ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
###Output
0% [##############################] 100% | ETA: 00:00:00
Total time elapsed: 00:01:21
###Markdown
데이터프레임을 섞습니다:
###Code
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
###Output
_____no_output_____
###Markdown
선택사항: 만들어진 데이터를 CSV 파일로 저장합니다:
###Code
df.to_csv('movie_data.csv', index=False, encoding='utf-8')
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
df.shape
###Output
_____no_output_____
###Markdown
BoW 모델 소개 단어를 특성 벡터로 변환하기 CountVectorizer의 fit_transform 메서드를 호출하여 BoW 모델의 어휘사전을 만들고 다음 세 문장을 희소한 특성 벡터로 변환합니다:1. The sun is shining2. The weather is sweet3. The sun is shining, the weather is sweet, and one and one is two
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
###Output
_____no_output_____
###Markdown
어휘 사전의 내용을 출력해 보면 BoW 모델의 개념을 이해하는 데 도움이 됩니다:
###Code
print(count.vocabulary_)
###Output
{'the': 6, 'sun': 4, 'is': 1, 'shining': 3, 'weather': 8, 'sweet': 5, 'and': 0, 'one': 2, 'two': 7}
###Markdown
이전 결과에서 볼 수 있듯이 어휘 사전은 고유 단어와 정수 인덱스가 매핑된 파이썬 딕셔너리에 저장되어 있습니다. 그다음 만들어진 특성 벡터를 출력해 봅시다: 특성 벡터의 각 인덱스는 CountVectorizer의 어휘 사전 딕셔너리에 저장된 정수 값에 해당됩니다. 예를 들어 인덱스 0에 있는 첫 번째 특성은 ‘and’ 단어의 카운트를 의미합니다. 이 단어는 마지막 문서에만 나타나네요. 인덱스 1에 있는 (특성 벡터의 두 번째 열) 단어 ‘is’는 세 문장에 모두 등장합니다. 특성 벡터의 이런 값들을 단어 빈도(term frequency) 라고도 부릅니다. 문서 d에 등장한 단어 t의 횟수를 *tf (t,d)*와 같이 씁니다.
###Code
print(bag.toarray())
###Output
[[0 1 0 1 1 0 1 0 0]
[0 1 0 0 0 1 1 0 1]
[2 3 2 1 1 1 2 1 1]]
###Markdown
tf-idf를 사용해 단어 적합성 평가하기
###Code
np.set_printoptions(precision=2)
###Output
_____no_output_____
###Markdown
텍스트 데이터를 분석할 때 클래스 레이블이 다른 문서에 같은 단어들이 나타나는 경우를 종종 보게 됩니다. 일반적으로 자주 등장하는 단어는 유용하거나 판별에 필요한 정보를 가지고 있지 않습니다. 이 절에서 특성 벡터에서 자주 등장하는 단어의 가중치를 낮추는 기법인 tf-idf(term frequency-inverse document frequency)에 대해 배우겠습니다. tf-idf는 단어 빈도와 역문서 빈도(inverse document frequency)의 곱으로 정의됩니다:$$\text{tf-idf}(t,d)=\text{tf (t,d)}\times \text{idf}(t,d)$$여기에서 tf(t, d)는 이전 절에서 보았던 단어 빈도입니다. *idf(t, d)*는 역문서 빈도로 다음과 같이 계산합니다:$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$여기에서 $n_d$는 전체 문서 개수이고 *df(d, t)*는 단어 t가 포함된 문서 d의 개수입니다. 분모에 상수 1을 추가하는 것은 선택 사항입니다. 훈련 샘플에 한 번도 등장하지 않는 단어가 있는 경우 분모가 0이 되지 않게 만듭니다. log는 문서 빈도 *df(d, t)*가 낮을 때 역문서 빈도 값이 너무 커지지 않도록 만듭니다.사이킷런 라이브러리에는 `CountVectorizer` 클래스에서 만든 단어 빈도를 입력받아 tf-idf로 변환하는 `TfidfTransformer` 클래스가 구현되어 있습니다:
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True,
norm='l2',
smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs))
.toarray())
###Output
[[0. 0.43 0. 0.56 0.56 0. 0.43 0. 0. ]
[0. 0.43 0. 0. 0. 0.56 0.43 0. 0.56]
[0.5 0.45 0.5 0.19 0.19 0.19 0.3 0.25 0.19]]
###Markdown
이전 절에서 보았듯이 세 번째 문서에서 단어 ‘is’가 가장 많이 나타났기 때문에 단어 빈도가 가장 컸습니다. 동일한 특성 벡터를 tf-idf로 변환하면 단어 ‘is’는 비교적 작은 tf-idf를 가집니다(0.45). 이 단어는 첫 번째와 두 번째 문서에도 나타나므로 판별에 유용한 정보를 가지고 있지 않을 것입니다. 수동으로 특성 벡터에 있는 각 단어의 tf-idf를 계산해 보면 `TfidfTransformer`가 앞서 정의한 표준 tf-idf 공식과 조금 다르게 계산한다는 것을 알 수 있습니다. 사이킷런에 구현된 역문서 빈도 공식은 다음과 같습니다. $$\text{idf} (t,d) = log\frac{1 + n_d}{1 + \text{df}(d, t)}$$비슷하게 사이킷런에서 계산하는 tf-idf는 앞서 정의한 공식과 조금 다릅니다:$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$일반적으로 tf-idf를 계산하기 전에 단어 빈도(tf)를 정규화하지만 `TfidfTransformer` 클래스는 tf-idf를 직접 정규화합니다. 사이킷런의 `TfidfTransformer`는 기본적으로 L2 정규화를 적용합니다(norm=’l2’). 정규화되지 않은 특성 벡터 v를 L2-노름으로 나누면 길이가 1인 벡터가 반환됩니다:$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$TfidfTransformer의 작동 원리를 이해하기 위해 세 번째 문서에 있는 단어 ‘is'의 tf-idf를 예로 들어 계산해 보죠.세 번째 문서에서 단어 ‘is’의 단어 빈도는 3입니다(tf=3). 이 단어는 세 개 문서에 모두 나타나기 때문에 문서 빈도가 3입니다(df=3). 따라서 역문서 빈도는 다음과 같이 계산됩니다:$$\text{idf}("is", d3) = log \frac{1+3}{1+3} = 0$$이제 tf-idf를 계산하기 위해 역문서 빈도에 1을 더하고 단어 빈도를 곱합니다:$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
###Code
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
###Output
tf-idf of term "is" = 3.00
###Markdown
세 번째 문서에 있는 모든 단어에 대해 이런 계산을 반복하면 tf-idf 벡터 [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]를 얻습니다. 이 특성 벡터의 값은 앞서 사용했던 TfidfTransformer에서 얻은 값과 다릅니다. tf-idf 계산에서 빠트린 마지막 단계는 다음과 같은 L2-정규화입니다:: $$\text{tfi-df}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0 , 1.69, 1.29]}{\sqrt{[3.39^2, 3.0^2, 3.39^2, 1.29^2, 1.29^2, 1.29^2, 2.0^2 , 1.69^2, 1.29^2]}}$$$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$$$\Rightarrow \text{tfi-df}_{norm}("is", d3) = 0.45$$ 결과에서 보듯이 사이킷런의 `TfidfTransformer`에서 반환된 결과와 같아졌습니다. tf-idf 계산 방법을 이해했으므로 다음 절로 넘어가 이 개념을 영화 리뷰 데이터셋에 적용해 보죠.
###Code
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
###Markdown
텍스트 데이터 정제
###Code
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)',
text)
text = (re.sub('[\W]+', ' ', text.lower()) +
' '.join(emoticons).replace('-', ''))
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
df['review'].map(preprocessor)
###Output
_____no_output_____
###Markdown
문서를 토큰으로 나누기
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
###Output
_____no_output_____
###Markdown
문서 분류를 위한 로지스틱 회귀 모델 훈련하기
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(solver='liblinear', random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=1)
###Output
_____no_output_____
###Markdown
**`n_jobs` 매개변수에 대하여**앞의 코드 예제에서 컴퓨터에 있는 모든 CPU 코어를 사용해 그리드 서치의 속도를 높이려면 (`n_jobs=1` 대신) `n_jobs=-1`로 지정하는 것이 좋습니다. 일부 시스템에서는 멀티프로세싱을 위해 `n_jobs=-1`로 지정할 때 `tokenizer` 와 `tokenizer_porter` 함수의 직렬화에 문제가 발생할 수 있습니다. 이런 경우 `[tokenizer, tokenizer_porter]`를 `[str.split]`로 바꾸어 문제를 해결할 수 있습니다. 다만 `str.split`로 바꾸면 어간 추출을 하지 못합니다. **코드 실행 시간에 대하여**다음 코드 셀을 실행하면 시스템에 따라 **30~60분 정도 걸릴 수 있습니다**. 매개변수 그리드에서 정의한 대로 2*2*2*3*5 + 2*2*2*3*5 = 240개의 모델을 훈련하기 때문입니다.**코랩을 사용할 경우에도 CPU 코어가 많지 않기 때문에 실행 시간이 오래 걸릴 수 있습니다.**너무 오래 기다리기 어렵다면 데이터셋의 훈련 샘플의 수를 다음처럼 줄일 수 있습니다: X_train = df.loc[:2500, 'review'].values y_train = df.loc[:2500, 'sentiment'].values 훈련 세트 크기를 줄이는 것은 모델 성능을 감소시킵니다. 그리드에 지정한 매개변수를 삭제하면 훈련한 모델 수를 줄일 수 있습니다. 예를 들면 다음과 같습니다: param_grid = [{'vect__ngram_range': [(1, 1)], 'vect__stop_words': [stop, None], 'vect__tokenizer': [tokenizer], 'clf__penalty': ['l1', 'l2'], 'clf__C': [1.0, 10.0]}, ]
###Code
gs_lr_tfidf.fit(X_train, y_train)
print('최적의 매개변수 조합: %s ' % gs_lr_tfidf.best_params_)
print('CV 정확도: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('테스트 정확도: %.3f' % clf.score(X_test, y_test))
###Output
테스트 정확도: 0.899
###Markdown
대용량 데이터 처리-온라인 알고리즘과 외부 메모리 학습
###Code
# 이 셀의 코드는 책에 포함되어 있지 않습니다.
# 이전 코드를 실행하지 않고 바로 시작할 수 있도록 편의를 위해 추가했습니다.
import os
import gzip
if not os.path.isfile('movie_data.csv'):
if not os.path.isfile('movie_data.csv.gz'):
print('Please place a copy of the movie_data.csv.gz'
'in this directory. You can obtain it by'
'a) executing the code in the beginning of this'
'notebook or b) by downloading it from GitHub:'
'https://github.com/rasbt/python-machine-learning-'
'book-2nd-edition/blob/master/code/ch08/movie_data.csv.gz')
else:
in_f = gzip.open('movie_data.csv.gz', 'rb')
out_f = open('movie_data.csv', 'wb')
out_f.write(in_f.read())
in_f.close()
out_f.close()
import numpy as np
import re
from nltk.corpus import stopwords
# `stop` 객체를 앞에서 정의했지만 이전 코드를 실행하지 않고
# 편의상 여기에서부터 코드를 실행하기 위해 다시 만듭니다.
stop = stopwords.words('english')
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # 헤더 넘기기
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
pass
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
clf = SGDClassifier(loss='log', random_state=1, max_iter=1)
doc_stream = stream_docs(path='movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('정확도: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
###Output
_____no_output_____
###Markdown
토픽 모델링 사이킷런의 LDA
###Code
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer(stop_words='english',
max_df=.1,
max_features=5000)
X = count.fit_transform(df['review'].values)
from sklearn.decomposition import LatentDirichletAllocation
lda = LatentDirichletAllocation(n_components=10,
random_state=123,
learning_method='batch')
X_topics = lda.fit_transform(X)
lda.components_.shape
n_top_words = 5
feature_names = count.get_feature_names()
for topic_idx, topic in enumerate(lda.components_):
print("토픽 %d:" % (topic_idx + 1))
print(" ".join([feature_names[i]
for i in topic.argsort()\
[:-n_top_words - 1:-1]]))
###Output
토픽 1:
worst minutes awful comedy money
토픽 2:
sex women woman men black
토픽 3:
comedy musical music song dance
토픽 4:
family father mother wife son
토픽 5:
war book american documentary japanese
토픽 6:
episode guy series house girl
토픽 7:
role performance actor john book
토픽 8:
war music beautiful cinema history
토픽 9:
horror budget gore killer effects
토픽 10:
action original series animation disney
###Markdown
각 토픽에서 가장 중요한 단어 다섯 개를 기반으로 LDA가 다음 토픽을 구별했다고 추측할 수 있습니다.1. 대체적으로 형편없는 영화(실제 토픽 카테고리가 되지 못함)2. 가족 영화3. 전쟁 영화4. 예술 영화5. 범죄 영화6. 공포 영화7. 코미디 영화8. TV 쇼와 관련된 영화9. 소설을 원작으로 한 영화10. 액션 영화 카테고리가 잘 선택됐는지 확인하기 위해 공포 영화 카테고리에서 3개 영화의 리뷰를 출력해 보죠(공포 영화는 카테고리 6이므로 인덱스는 5입니다):
###Code
horror = X_topics[:, 5].argsort()[::-1]
for iter_idx, movie_idx in enumerate(horror[:3]):
print('\n공포 영화 #%d:' % (iter_idx + 1))
print(df['review'][movie_idx][:300], '...')
###Output
공포 영화 #1:
This is one of the funniest movies I've ever saw. A 14-year old boy Jason Shepherd wrote an English paper called "Big Fat Liar". When his skateboard was taken, he had to use his sister's bike to get to the college on time and he hit a limo. When he went into the limo, he met a famous producer from H ...
공포 영화 #2:
Where to start? Some guy has some Indian pot that he's cleaning, and suddenly Skeletor attacks. He hits a woman in the neck with an axe, she falls down, but then gets up and is apparently uninjured. She runs into the woods, and it turns out there's the basement of a shopping center out there in the ...
공포 영화 #3:
***SPOILERS*** ***SPOILERS*** Some bunch of Afrikkaner-Hillbilly types are out in the desert looking for Diamonds when they find a hard mound in the middle of a sandy desert area. Spoilers: The dumbest one starts hitting the mound with a pick, and cracks it open. Then he looks into the hole and stic ...
###Markdown
8장. 감성 분석에 머신 러닝 적용하기 **아래 링크를 통해 이 노트북을 주피터 노트북 뷰어(nbviewer.jupyter.org)로 보거나 구글 코랩(colab.research.google.com)에서 실행할 수 있습니다.** 주피터 노트북 뷰어로 보기 구글 코랩(Colab)에서 실행하기 `watermark`는 주피터 노트북에 사용하는 파이썬 패키지를 출력하기 위한 유틸리티입니다. `watermark` 패키지를 설치하려면 다음 셀의 주석을 제거한 뒤 실행하세요.
###Code
#!pip install watermark
%load_ext watermark
%watermark -u -d -v -p numpy,pandas,sklearn,nltk
###Output
last updated: 2019-04-26
CPython 3.7.3
IPython 7.4.0
numpy 1.16.3
pandas 0.24.2
sklearn 0.20.3
nltk 3.4.1
###Markdown
텍스트 처리용 IMDb 영화 리뷰 데이터 준비 영화 리뷰 데이터셋 구하기 IMDB 영화 리뷰 데이터셋은 [http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz](http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz)에서 내려받을 수 있습니다. 다운로드된 후 파일 압축을 해제합니다.A) 리눅스(Linux)나 macOS를 사용하면 새로운 터미널(Terminal) 윈도우를 열고 `cd` 명령으로 다운로드 디렉터리로 이동하여 다음 명령을 실행하세요. `tar -zxf aclImdb_v1.tar.gz`B) 윈도(Windows)를 사용하면 7Zip(http://www.7-zip.org) 같은 무료 압축 유틸리티를 설치하여 다운로드한 파일의 압축을 풀 수 있습니다. **코랩이나 리눅스에서 직접 다운로드하려면 다음 셀의 주석을 제거하고 실행하세요.**
###Code
#!wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
###Output
_____no_output_____
###Markdown
**다음처럼 파이썬에서 직접 압축을 풀 수도 있습니다:**
###Code
import os
import tarfile
if not os.path.isdir('aclImdb'):
with tarfile.open('aclImdb_v1.tar.gz', 'r:gz') as tar:
tar.extractall()
###Output
_____no_output_____
###Markdown
영화 리뷰 데이터셋을 더 간편한 형태로 전처리하기 `pyprind`는 주피터 노트북에서 진행바를 출력하기 위한 유틸리티입니다. `pyprind` 패키지를 설치하려면 다음 셀의 주석을 제거한 뒤 실행하세요.
###Code
#!pip install pyprind
import pyprind
import pandas as pd
import os
# `basepath`를 압축 해제된 영화 리뷰 데이터셋이 있는
# 디렉토리로 바꾸세요
basepath = 'aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in sorted(os.listdir(path)):
with open(os.path.join(path, file),
'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]],
ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
###Output
0% [##############################] 100% | ETA: 00:00:00
Total time elapsed: 00:01:32
###Markdown
데이터프레임을 섞습니다:
###Code
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
###Output
_____no_output_____
###Markdown
선택사항: 만들어진 데이터를 CSV 파일로 저장합니다:
###Code
df.to_csv('movie_data.csv', index=False, encoding='utf-8')
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
df.shape
###Output
_____no_output_____
###Markdown
BoW 모델 소개 단어를 특성 벡터로 변환하기 CountVectorizer의 fit_transform 메서드를 호출하여 BoW 모델의 어휘사전을 만들고 다음 세 문장을 희소한 특성 벡터로 변환합니다:1. The sun is shining2. The weather is sweet3. The sun is shining, the weather is sweet, and one and one is two
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
###Output
_____no_output_____
###Markdown
어휘 사전의 내용을 출력해 보면 BoW 모델의 개념을 이해하는 데 도움이 됩니다:
###Code
print(count.vocabulary_)
###Output
{'the': 6, 'sun': 4, 'is': 1, 'shining': 3, 'weather': 8, 'sweet': 5, 'and': 0, 'one': 2, 'two': 7}
###Markdown
이전 결과에서 볼 수 있듯이 어휘 사전은 고유 단어와 정수 인덱스가 매핑된 파이썬 딕셔너리에 저장되어 있습니다. 그다음 만들어진 특성 벡터를 출력해 봅시다: 특성 벡터의 각 인덱스는 CountVectorizer의 어휘 사전 딕셔너리에 저장된 정수 값에 해당됩니다. 예를 들어 인덱스 0에 있는 첫 번째 특성은 ‘and’ 단어의 카운트를 의미합니다. 이 단어는 마지막 문서에만 나타나네요. 인덱스 1에 있는 (특성 벡터의 두 번째 열) 단어 ‘is’는 세 문장에 모두 등장합니다. 특성 벡터의 이런 값들을 단어 빈도(term frequency) 라고도 부릅니다. 문서 d에 등장한 단어 t의 횟수를 *tf (t,d)*와 같이 씁니다.
###Code
print(bag.toarray())
###Output
[[0 1 0 1 1 0 1 0 0]
[0 1 0 0 0 1 1 0 1]
[2 3 2 1 1 1 2 1 1]]
###Markdown
tf-idf를 사용해 단어 적합성 평가하기
###Code
np.set_printoptions(precision=2)
###Output
_____no_output_____
###Markdown
텍스트 데이터를 분석할 때 클래스 레이블이 다른 문서에 같은 단어들이 나타나는 경우를 종종 보게 됩니다. 일반적으로 자주 등장하는 단어는 유용하거나 판별에 필요한 정보를 가지고 있지 않습니다. 이 절에서 특성 벡터에서 자주 등장하는 단어의 가중치를 낮추는 기법인 tf-idf(term frequency-inverse document frequency)에 대해 배우겠습니다. tf-idf는 단어 빈도와 역문서 빈도(inverse document frequency)의 곱으로 정의됩니다:$$\text{tf-idf}(t,d)=\text{tf (t,d)}\times \text{idf}(t,d)$$여기에서 tf(t, d)는 이전 절에서 보았던 단어 빈도입니다. *idf(t, d)*는 역문서 빈도로 다음과 같이 계산합니다:$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$여기에서 $n_d$는 전체 문서 개수이고 *df(d, t)*는 단어 t가 포함된 문서 d의 개수입니다. 분모에 상수 1을 추가하는 것은 선택 사항입니다. 훈련 샘플에 한 번도 등장하지 않는 단어가 있는 경우 분모가 0이 되지 않게 만듭니다. log는 문서 빈도 *df(d, t)*가 낮을 때 역문서 빈도 값이 너무 커지지 않도록 만듭니다.사이킷런 라이브러리에는 `CountVectorizer` 클래스에서 만든 단어 빈도를 입력받아 tf-idf로 변환하는 `TfidfTransformer` 클래스가 구현되어 있습니다:
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True,
norm='l2',
smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs))
.toarray())
###Output
[[0. 0.43 0. 0.56 0.56 0. 0.43 0. 0. ]
[0. 0.43 0. 0. 0. 0.56 0.43 0. 0.56]
[0.5 0.45 0.5 0.19 0.19 0.19 0.3 0.25 0.19]]
###Markdown
이전 절에서 보았듯이 세 번째 문서에서 단어 ‘is’가 가장 많이 나타났기 때문에 단어 빈도가 가장 컸습니다. 동일한 특성 벡터를 tf-idf로 변환하면 단어 ‘is’는 비교적 작은 tf-idf를 가집니다(0.45). 이 단어는 첫 번째와 두 번째 문서에도 나타나므로 판별에 유용한 정보를 가지고 있지 않을 것입니다. 수동으로 특성 벡터에 있는 각 단어의 tf-idf를 계산해 보면 `TfidfTransformer`가 앞서 정의한 표준 tf-idf 공식과 조금 다르게 계산한다는 것을 알 수 있습니다. 사이킷런에 구현된 역문서 빈도 공식은 다음과 같습니다. $$\text{idf} (t,d) = log\frac{1 + n_d}{1 + \text{df}(d, t)}$$비슷하게 사이킷런에서 계산하는 tf-idf는 앞서 정의한 공식과 조금 다릅니다:$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$일반적으로 tf-idf를 계산하기 전에 단어 빈도(tf)를 정규화하지만 `TfidfTransformer` 클래스는 tf-idf를 직접 정규화합니다. 사이킷런의 `TfidfTransformer`는 기본적으로 L2 정규화를 적용합니다(norm=’l2’). 정규화되지 않은 특성 벡터 v를 L2-노름으로 나누면 길이가 1인 벡터가 반환됩니다:$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$TfidfTransformer의 작동 원리를 이해하기 위해 세 번째 문서에 있는 단어 ‘is'의 tf-idf를 예로 들어 계산해 보죠.세 번째 문서에서 단어 ‘is’의 단어 빈도는 3입니다(tf=3). 이 단어는 세 개 문서에 모두 나타나기 때문에 문서 빈도가 3입니다(df=3). 따라서 역문서 빈도는 다음과 같이 계산됩니다:$$\text{idf}("is", d3) = log \frac{1+3}{1+3} = 0$$이제 tf-idf를 계산하기 위해 역문서 빈도에 1을 더하고 단어 빈도를 곱합니다:$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
###Code
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
###Output
tf-idf of term "is" = 3.00
###Markdown
세 번째 문서에 있는 모든 단어에 대해 이런 계산을 반복하면 tf-idf 벡터 [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]를 얻습니다. 이 특성 벡터의 값은 앞서 사용했던 TfidfTransformer에서 얻은 값과 다릅니다. tf-idf 계산에서 빠트린 마지막 단계는 다음과 같은 L2-정규화입니다:: $$\text{tfi-df}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0 , 1.69, 1.29]}{\sqrt{[3.39^2, 3.0^2, 3.39^2, 1.29^2, 1.29^2, 1.29^2, 2.0^2 , 1.69^2, 1.29^2]}}$$$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$$$\Rightarrow \text{tfi-df}_{norm}("is", d3) = 0.45$$ 결과에서 보듯이 사이킷런의 `TfidfTransformer`에서 반환된 결과와 같아졌습니다. tf-idf 계산 방법을 이해했으므로 다음 절로 넘어가 이 개념을 영화 리뷰 데이터셋에 적용해 보죠.
###Code
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
###Markdown
텍스트 데이터 정제
###Code
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)',
text)
text = (re.sub('[\W]+', ' ', text.lower()) +
' '.join(emoticons).replace('-', ''))
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
df['review'].map(preprocessor)
###Output
_____no_output_____
###Markdown
문서를 토큰으로 나누기
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
###Output
_____no_output_____
###Markdown
문서 분류를 위한 로지스틱 회귀 모델 훈련하기
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(solver='liblinear', random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=1)
###Output
_____no_output_____
###Markdown
**`n_jobs` 매개변수에 대하여**앞의 코드 예제에서 컴퓨터에 있는 모든 CPU 코어를 사용해 그리드 서치의 속도를 높이려면 (`n_jobs=1` 대신) `n_jobs=-1`로 지정하는 것이 좋습니다. 일부 시스템에서는 멀티프로세싱을 위해 `n_jobs=-1`로 지정할 때 `tokenizer` 와 `tokenizer_porter` 함수의 직렬화에 문제가 발생할 수 있습니다. 이런 경우 `[tokenizer, tokenizer_porter]`를 `[str.split]`로 바꾸어 문제를 해결할 수 있습니다. 다만 `str.split`로 바꾸면 어간 추출을 하지 못합니다. **코드 실행 시간에 대하여**다음 코드 셀을 실행하면 시스템에 따라 **30~60분 정도 걸릴 수 있습니다**. 매개변수 그리드에서 정의한 대로 2*2*2*3*5 + 2*2*2*3*5 = 240개의 모델을 훈련하기 때문입니다.**코랩을 사용할 경우에도 CPU 코어가 많지 않기 때문에 실행 시간이 오래 걸릴 수 있습니다.**너무 오래 기다리기 어렵다면 데이터셋의 훈련 샘플의 수를 다음처럼 줄일 수 있습니다: X_train = df.loc[:2500, 'review'].values y_train = df.loc[:2500, 'sentiment'].values 훈련 세트 크기를 줄이는 것은 모델 성능을 감소시킵니다. 그리드에 지정한 매개변수를 삭제하면 훈련한 모델 수를 줄일 수 있습니다. 예를 들면 다음과 같습니다: param_grid = [{'vect__ngram_range': [(1, 1)], 'vect__stop_words': [stop, None], 'vect__tokenizer': [tokenizer], 'clf__penalty': ['l1', 'l2'], 'clf__C': [1.0, 10.0]}, ]
###Code
gs_lr_tfidf.fit(X_train, y_train)
print('최적의 매개변수 조합: %s ' % gs_lr_tfidf.best_params_)
print('CV 정확도: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('테스트 정확도: %.3f' % clf.score(X_test, y_test))
###Output
테스트 정확도: 0.899
###Markdown
대용량 데이터 처리-온라인 알고리즘과 외부 메모리 학습
###Code
# 이 셀의 코드는 책에 포함되어 있지 않습니다.
# 이전 코드를 실행하지 않고 바로 시작할 수 있도록 편의를 위해 추가했습니다.
import os
import gzip
if not os.path.isfile('movie_data.csv'):
if not os.path.isfile('movie_data.csv.gz'):
print('Please place a copy of the movie_data.csv.gz'
'in this directory. You can obtain it by'
'a) executing the code in the beginning of this'
'notebook or b) by downloading it from GitHub:'
'https://github.com/rasbt/python-machine-learning-'
'book-2nd-edition/blob/master/code/ch08/movie_data.csv.gz')
else:
in_f = gzip.open('movie_data.csv.gz', 'rb')
out_f = open('movie_data.csv', 'wb')
out_f.write(in_f.read())
in_f.close()
out_f.close()
import numpy as np
import re
from nltk.corpus import stopwords
# `stop` 객체를 앞에서 정의했지만 이전 코드를 실행하지 않고
# 편의상 여기에서부터 코드를 실행하기 위해 다시 만듭니다.
stop = stopwords.words('english')
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # 헤더 넘기기
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
pass
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
clf = SGDClassifier(loss='log', random_state=1, max_iter=1)
doc_stream = stream_docs(path='movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('정확도: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
###Output
_____no_output_____
###Markdown
토픽 모델링 사이킷런의 LDA
###Code
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer(stop_words='english',
max_df=.1,
max_features=5000)
X = count.fit_transform(df['review'].values)
from sklearn.decomposition import LatentDirichletAllocation
lda = LatentDirichletAllocation(n_components=10,
random_state=123,
learning_method='batch')
X_topics = lda.fit_transform(X)
lda.components_.shape
n_top_words = 5
feature_names = count.get_feature_names()
for topic_idx, topic in enumerate(lda.components_):
print("토픽 %d:" % (topic_idx + 1))
print(" ".join([feature_names[i]
for i in topic.argsort()\
[:-n_top_words - 1:-1]]))
###Output
토픽 1:
worst minutes awful script stupid
토픽 2:
family mother father children girl
토픽 3:
american war dvd music tv
토픽 4:
human audience cinema art sense
토픽 5:
police guy car dead murder
토픽 6:
horror house sex girl woman
토픽 7:
role performance comedy actor performances
토픽 8:
series episode war episodes tv
토픽 9:
book version original read novel
토픽 10:
action fight guy guys cool
###Markdown
각 토픽에서 가장 중요한 단어 다섯 개를 기반으로 LDA가 다음 토픽을 구별했다고 추측할 수 있습니다.1. 대체적으로 형편없는 영화(실제 토픽 카테고리가 되지 못함)2. 가족 영화3. 전쟁 영화4. 예술 영화5. 범죄 영화6. 공포 영화7. 코미디 영화8. TV 쇼와 관련된 영화9. 소설을 원작으로 한 영화10. 액션 영화 카테고리가 잘 선택됐는지 확인하기 위해 공포 영화 카테고리에서 3개 영화의 리뷰를 출력해 보죠(공포 영화는 카테고리 6이므로 인덱스는 5입니다):
###Code
horror = X_topics[:, 5].argsort()[::-1]
for iter_idx, movie_idx in enumerate(horror[:3]):
print('\n공포 영화 #%d:' % (iter_idx + 1))
print(df['review'][movie_idx][:300], '...')
###Output
공포 영화 #1:
House of Dracula works from the same basic premise as House of Frankenstein from the year before; namely that Universal's three most famous monsters; Dracula, Frankenstein's Monster and The Wolf Man are appearing in the movie together. Naturally, the film is rather messy therefore, but the fact that ...
공포 영화 #2:
Okay, what the hell kind of TRASH have I been watching now? "The Witches' Mountain" has got to be one of the most incoherent and insane Spanish exploitation flicks ever and yet, at the same time, it's also strangely compelling. There's absolutely nothing that makes sense here and I even doubt there ...
공포 영화 #3:
<br /><br />Horror movie time, Japanese style. Uzumaki/Spiral was a total freakfest from start to finish. A fun freakfest at that, but at times it was a tad too reliant on kitsch rather than the horror. The story is difficult to summarize succinctly: a carefree, normal teenage girl starts coming fac ...
###Markdown
*Python Machine Learning 2nd Edition* by [Sebastian Raschka](https://sebastianraschka.com), Packt Publishing Ltd. 2017Code Repository: https://github.com/rasbt/python-machine-learning-book-2nd-editionCode License: [MIT License](https://github.com/rasbt/python-machine-learning-book-2nd-edition/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 8 - Applying Machine Learning To Sentiment Analysis Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a "Sebastian Raschka" -u -d -v -p numpy,pandas,sklearn,nltk
###Output
Sebastian Raschka
last updated: 2018-07-02
CPython 3.6.5
IPython 6.4.0
numpy 1.14.5
pandas 0.23.1
sklearn 0.19.1
nltk 3.3
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview - [Preparing the IMDb movie review data for text processing](Preparing-the-IMDb-movie-review-data-for-text-processing) - [Obtaining the IMDb movie review dataset](Obtaining-the-IMDb-movie-review-dataset) - [Preprocessing the movie dataset into more convenient format](Preprocessing-the-movie-dataset-into-more-convenient-format)- [Introducing the bag-of-words model](Introducing-the-bag-of-words-model) - [Transforming words into feature vectors](Transforming-words-into-feature-vectors) - [Assessing word relevancy via term frequency-inverse document frequency](Assessing-word-relevancy-via-term-frequency-inverse-document-frequency) - [Cleaning text data](Cleaning-text-data) - [Processing documents into tokens](Processing-documents-into-tokens)- [Training a logistic regression model for document classification](Training-a-logistic-regression-model-for-document-classification)- [Working with bigger data – online algorithms and out-of-core learning](Working-with-bigger-data-–-online-algorithms-and-out-of-core-learning)- [Topic modeling](Topic-modeling) - [Decomposing text documents with Latent Dirichlet Allocation](Decomposing-text-documents-with-Latent-Dirichlet-Allocation) - [Latent Dirichlet Allocation with scikit-learn](Latent-Dirichlet-Allocation-with-scikit-learn)- [Summary](Summary) Preparing the IMDb movie review data for text processing Obtaining the IMDb movie review dataset The IMDB movie review set can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/).After downloading the dataset, decompress the files.A) If you are working with Linux or MacOS X, open a new terminal windowm `cd` into the download directory and execute `tar -zxf aclImdb_v1.tar.gz`B) If you are working with Windows, download an archiver such as [7Zip](http://www.7-zip.org) to extract the files from the download archive. **Optional code to download and unzip the dataset via Python:**
###Code
import os
import sys
import tarfile
import time
source = 'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'
target = 'aclImdb_v1.tar.gz'
def reporthook(count, block_size, total_size):
global start_time
if count == 0:
start_time = time.time()
return
duration = time.time() - start_time
progress_size = int(count * block_size)
speed = progress_size / (1024.**2 * duration)
percent = count * block_size * 100. / total_size
sys.stdout.write("\r%d%% | %d MB | %.2f MB/s | %d sec elapsed" %
(percent, progress_size / (1024.**2), speed, duration))
sys.stdout.flush()
if not os.path.isdir('aclImdb') and not os.path.isfile('aclImdb_v1.tar.gz'):
if (sys.version_info < (3, 0)):
import urllib
urllib.urlretrieve(source, target, reporthook)
else:
import urllib.request
urllib.request.urlretrieve(source, target, reporthook)
if not os.path.isdir('aclImdb'):
with tarfile.open(target, 'r:gz') as tar:
tar.extractall()
###Output
_____no_output_____
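###Markdown
The following cell is not part of the original notebook; it is a small sanity check, assuming the archive was extracted into `./aclImdb`, that counts the review files per split and label (each of the four folders should contain 12,500 files).
###Code
import os
# Count the review files per split/label in the extracted dataset (assumes ./aclImdb exists).
for s in ('train', 'test'):
    for l in ('pos', 'neg'):
        path = os.path.join('aclImdb', s, l)
        if os.path.isdir(path):
            print('%s/%s: %d files' % (s, l, len(os.listdir(path))))
###Output
_____no_output_____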
###Markdown
Preprocessing the movie dataset into more convenient format
###Code
import pyprind
import pandas as pd
import os
# change the `basepath` to the directory of the
# unzipped movie dataset
basepath = 'aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in sorted(os.listdir(path)):
with open(os.path.join(path, file),
'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]],
ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
###Output
0% [##############################] 100% | ETA: 00:00:00
Total time elapsed: 00:04:19
###Markdown
Shuffling the DataFrame:
###Code
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
###Output
_____no_output_____
###Markdown
Optional: Saving the assembled data as CSV file:
###Code
df.to_csv('movie_data.csv', index=False, encoding='utf-8')
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
df.shape
###Output
_____no_output_____
###Markdown
Note: If you have problems with creating the `movie_data.csv`, you can download a zip archive at https://github.com/rasbt/python-machine-learning-book-2nd-edition/tree/master/code/ch08/ Introducing the bag-of-words model ... Transforming documents into feature vectors By calling the fit_transform method on CountVectorizer, we just constructed the vocabulary of the bag-of-words model and transformed the following three sentences into sparse feature vectors:1. The sun is shining2. The weather is sweet3. The sun is shining, the weather is sweet, and one and one is two
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
###Output
_____no_output_____
###Markdown
Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts:
###Code
print(count.vocabulary_)
###Output
{'the': 6, 'sun': 4, 'is': 1, 'shining': 3, 'weather': 8, 'sweet': 5, 'and': 0, 'one': 2, 'two': 7}
###Markdown
As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary that maps the unique words to integer indices. Next, let us print the feature vectors that we just created: Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the CountVectorizer vocabulary. For example, the first feature at index position 0 represents the count of the word 'and', which only occurs in the last document, and the word 'is' at index position 1 (the 2nd feature in the document vectors) occurs in all three sentences. Those values in the feature vectors are also called the raw term frequencies: *tf(t, d)*, the number of times a term *t* occurs in a document *d*.
###Code
print(bag.toarray())
###Output
[[0 1 0 1 1 0 1 0 0]
[0 1 0 0 0 1 1 0 1]
[2 3 2 1 1 1 2 1 1]]
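###Markdown
As a small addition that is not part of the original notebook, the fitted `vocabulary_` mapping can also be used to look up the column of a single word; the sketch below prints the raw term frequency of the illustrative word 'sun' in each of the three documents.
###Code
# A minimal sketch (not in the book): look up the raw term frequency of one word.
word = 'sun'  # illustrative choice; any word from the vocabulary works
col = count.vocabulary_[word]
print(bag.toarray()[:, col])
###Output
_____no_output_____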
###Markdown
Assessing word relevancy via term frequency-inverse document frequency
###Code
np.set_printoptions(precision=2)
###Output
_____no_output_____
###Markdown
When we are analyzing text data, we often encounter words that occur across multiple documents from both classes. Those frequently occurring words typically don't contain useful or discriminatory information. In this subsection, we will learn about a useful technique called term frequency-inverse document frequency (tf-idf) that can be used to downweight those frequently occurring words in the feature vectors. The tf-idf can be defined as the product of the term frequency and the inverse document frequency:$$\text{tf-idf}(t,d)=\text{tf}(t,d)\times \text{idf}(t,d)$$Here, tf(t, d) is the term frequency that we introduced in the previous section, and the inverse document frequency *idf(t, d)* can be calculated as:$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$where $n_d$ is the total number of documents, and *df(d, t)* is the number of documents *d* that contain the term *t*. Note that adding the constant 1 to the denominator is optional and serves the purpose of preventing a division by zero for terms that do not occur in any of the training samples; the log is used to ensure that low document frequencies are not given too much weight. Scikit-learn implements yet another transformer, the `TfidfTransformer`, that takes the raw term frequencies from `CountVectorizer` as input and transforms them into tf-idfs:
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True,
norm='l2',
smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs))
.toarray())
###Output
[[ 0. 0.43 0. 0.56 0.56 0. 0.43 0. 0. ]
[ 0. 0.43 0. 0. 0. 0.56 0.43 0. 0.56]
[ 0.5 0.45 0.5 0.19 0.19 0.19 0.3 0.25 0.19]]
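###Markdown
Before walking through scikit-learn's exact formulation, here is a small sketch that is not part of the original notebook: it evaluates the textbook idf formula above directly with NumPy. Note that, under this variant, terms that occur in every document receive a slightly negative idf, which is one motivation for the smoothed formulation that scikit-learn uses (discussed next).
###Code
# A hedged sketch (not in the book): compute the textbook idf(t, d) = log(n_d / (1 + df(d, t))).
tf = bag.toarray()              # raw term frequencies from CountVectorizer
df_t = (tf > 0).sum(axis=0)     # document frequency of each term
n_d = tf.shape[0]               # total number of documents
idf_textbook = np.log(n_d / (1 + df_t))
print(np.round(idf_textbook, 2))
###Output
_____no_output_____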
###Markdown
As we saw in the previous subsection, the word 'is' had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word 'is' is now associated with a relatively small tf-idf (0.45) in document 3, since it is also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information. However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the `TfidfTransformer` calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier. The idf equation that is implemented in scikit-learn is: $$\text{idf}(t,d) = \text{log}\frac{1 + n_d}{1 + \text{df}(d, t)}$$The tf-idf equation that is implemented in scikit-learn is as follows:$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$While it is also more typical to normalize the raw term frequencies before calculating the tf-idfs, the `TfidfTransformer` normalizes the tf-idfs directly. By default (`norm='l2'`), scikit-learn's TfidfTransformer applies the L2-normalization, which returns a vector of length 1 by dividing an un-normalized feature vector *v* by its L2-norm:$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$To make sure that we understand how TfidfTransformer works, let us walk through an example and calculate the tf-idf of the word 'is' in the 3rd document. The word 'is' has a term frequency of 3 (tf = 3) in document 3, and the document frequency of this term is 3 since the term 'is' occurs in all three documents (df = 3). Thus, we can calculate the idf as follows:$$\text{idf}("is", d3) = \text{log} \frac{1+3}{1+3} = 0$$Now in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
###Code
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
###Output
tf-idf of term "is" = 3.00
###Markdown
If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vector: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]. However, we notice that the values in this feature vector are different from the values that we obtained from the TfidfTransformer that we used previously. The final step that we are missing in this tf-idf calculation is the L2-normalization, which can be applied as follows: $$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0 , 1.69, 1.29]}{\sqrt{3.39^2 + 3.0^2 + 3.39^2 + 1.29^2 + 1.29^2 + 1.29^2 + 2.0^2 + 1.69^2 + 1.29^2}}$$$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$$$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$ As we can see, the results match the results returned by scikit-learn's `TfidfTransformer` (below). Since we now understand how tf-idfs are calculated, let us proceed to the next sections and apply those concepts to the movie review dataset.
###Code
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
###Markdown
Cleaning text data
###Code
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)',
text)
text = (re.sub('[\W]+', ' ', text.lower()) +
' '.join(emoticons).replace('-', ''))
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
###Output
_____no_output_____
###Markdown
Processing documents into tokens
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
###Output
_____no_output_____
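###Markdown
As a small aside that is not part of the original notebook, the sketch below chains the cleaning and tokenization steps on one real review from the DataFrame: HTML/punctuation removal with `preprocessor`, Porter stemming with `tokenizer_porter`, and stop-word filtering with `stop`.
###Code
# A minimal sketch (not in the book): run the full cleaning/tokenizing chain on one review.
sample = df.loc[0, 'review']
tokens = [w for w in tokenizer_porter(preprocessor(sample)) if w not in stop]
print(tokens[:10])  # first ten stemmed, stop-word-filtered tokens
###Output
_____no_output_____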
###Markdown
Training a logistic regression model for document classification Strip HTML and punctuation to speed up the GridSearch later:
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=-1)
###Output
_____no_output_____
###Markdown
**Important Note about `n_jobs`**Please note that it is highly recommended to use `n_jobs=-1` (instead of `n_jobs=1`) in the previous code example to utilize all available cores on your machine and speed up the grid search. However, some Windows users reported issues when running the previous code with the `n_jobs=-1` setting related to pickling the tokenizer and tokenizer_porter functions for multiprocessing on Windows. Another workaround would be to replace those two functions, `[tokenizer, tokenizer_porter]`, with `[str.split]`. However, note that the replacement by the simple `str.split` would not support stemming. **Important Note about the running time**Executing the following code cell **may take up to 30-60 min** depending on your machine, since based on the parameter grid we defined, there are 2*2*2*3*5 + 2*2*2*3*5 = 240 models to fit.If you do not wish to wait so long, you could reduce the size of the dataset by decreasing the number of training samples, for example, as follows: X_train = df.loc[:2500, 'review'].values y_train = df.loc[:2500, 'sentiment'].values However, note that decreasing the training set size to such a small number will likely result in poorly performing models. Alternatively, you can delete parameters from the grid above to reduce the number of models to fit -- for example, by using the following: param_grid = [{'vect__ngram_range': [(1, 1)], 'vect__stop_words': [stop, None], 'vect__tokenizer': [tokenizer], 'clf__penalty': ['l1', 'l2'], 'clf__C': [1.0, 10.0]}, ]
###Code
## @Readers: PLEASE IGNORE THIS CELL
##
## This cell is meant to generate more
## "logging" output when this notebook is run
## on the Travis Continuous Integration
## platform to test the code as well as
## speeding up the run using a smaller
## dataset for debugging
if 'TRAVIS' in os.environ:
gs_lr_tfidf.verbose=2
X_train = df.loc[:250, 'review'].values
y_train = df.loc[:250, 'sentiment'].values
X_test = df.loc[25000:25250, 'review'].values
y_test = df.loc[25000:25250, 'sentiment'].values
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
###Output
Test Accuracy: 0.899
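###Markdown
As a quick cross-check of the note above about the number of models, the following sketch (not part of the original notebook) counts the parameter combinations in `param_grid` with scikit-learn's `ParameterGrid` helper; 48 combinations times 5 cross-validation folds gives the 240 fits mentioned earlier.
###Code
# A small sketch (not in the book): count the parameter combinations in the grid.
from sklearn.model_selection import ParameterGrid
n_combinations = len(list(ParameterGrid(param_grid)))
print('%d parameter combinations x 5 CV folds = %d fits' % (n_combinations, n_combinations * 5))
###Output
_____no_output_____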
###Markdown
Start comment: Please note that `gs_lr_tfidf.best_score_` is the average k-fold cross-validation score. I.e., if we have a `GridSearchCV` object with 5-fold cross-validation (like the one above), the `best_score_` attribute returns the average score over the 5 folds of the best model. To illustrate this with an example:
###Code
from sklearn.linear_model import LogisticRegression
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
cv5_idx = list(StratifiedKFold(n_splits=5, shuffle=False, random_state=0).split(X, y))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
###Output
_____no_output_____
###Markdown
By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (`cv5_idx`) to the `cross_val_score` scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds. Next, let us use the `GridSearchCV` object and feed it the same 5 cross-validation sets (via the pre-generated `cv5_idx` indices):
###Code
from sklearn.model_selection import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
###Output
Fitting 5 folds for each of 1 candidates, totalling 5 fits
[CV] ................................................................
[CV] ................................. , score=0.600000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.400000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.600000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.200000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.600000, total= 0.0s
###Markdown
As we can see, the scores for the 5 folds are exactly the same as the ones from `cross_val_score` earlier. Now, the best_score_ attribute of the `GridSearchCV` object, which becomes available after `fit`ting, returns the average accuracy score of the best model:
###Code
gs.best_score_
###Output
_____no_output_____
###Markdown
As we can see, the result above is consistent with the average score computed by `cross_val_score`.
###Code
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
###Output
_____no_output_____
###Markdown
End comment. Working with bigger data - online algorithms and out-of-core learning
###Code
# This cell is not contained in the book but
# added for convenience so that the notebook
# can be executed starting here, without
# executing prior code in this notebook
import os
import gzip
if not os.path.isfile('movie_data.csv'):
if not os.path.isfile('movie_data.csv.gz'):
print('Please place a copy of the movie_data.csv.gz'
'in this directory. You can obtain it by'
'a) executing the code in the beginning of this'
'notebook or b) by downloading it from GitHub:'
'https://github.com/rasbt/python-machine-learning-'
'book-2nd-edition/blob/master/code/ch08/movie_data.csv.gz')
else:
with gzip.open('movie_data.csv.gz', 'rb') as in_f, \
open('movie_data.csv', 'wb') as out_f:
out_f.write(in_f.read())
import numpy as np
import re
from nltk.corpus import stopwords
# `stop` was defined earlier in this chapter; it is redefined
# here for convenience, so that this section can be run
# standalone without executing the prior code in this notebook
stop = stopwords.words('english')
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
###Output
_____no_output_____
###Markdown
**Note**- You can replace `n_iter` by `max_iter` in scikit-learn >= 0.19; this applies to the `SGDClassifier` used below (the same renaming applies to `Perceptron`).
###Code
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
if Version(sklearn_version) < '0.19':
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
else:
clf = SGDClassifier(loss='log', random_state=1, max_iter=1)
doc_stream = stream_docs(path='movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
###Output
_____no_output_____
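###Markdown
The following cell is not part of the original notebook; it is a hedged sketch of how the incrementally trained classifier and the stop-word list could be serialized with `pickle` for later reuse (for example in a web application). The directory and file names are illustrative.
###Code
import pickle
import os
# Illustrative output location; adjust the paths as needed.
dest = os.path.join('movieclassifier', 'pkl_objects')
os.makedirs(dest, exist_ok=True)
pickle.dump(stop, open(os.path.join(dest, 'stopwords.pkl'), 'wb'), protocol=4)
pickle.dump(clf, open(os.path.join(dest, 'classifier.pkl'), 'wb'), protocol=4)
###Output
_____no_output_____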
###Markdown
Topic modeling Decomposing text documents with Latent Dirichlet Allocation Latent Dirichlet Allocation with scikit-learn
###Code
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
## @Readers: PLEASE IGNORE THIS CELL
##
## This cell is meant to create a smaller dataset if
## the notebook is run on the Travis Continuous Integration
## platform to test the code on a smaller dataset
## to prevent timeout errors and just serves as a debugging tool
## for this notebook
if 'TRAVIS' in os.environ:
df.loc[:500].to_csv('movie_data.csv')
df = pd.read_csv('movie_data.csv', nrows=500)
print('SMALL DATA SUBSET CREATED FOR TESTING')
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer(stop_words='english',
max_df=.1,
max_features=5000)
X = count.fit_transform(df['review'].values)
from sklearn.decomposition import LatentDirichletAllocation
# note: in scikit-learn >= 0.21 the `n_topics` parameter is called `n_components`
lda = LatentDirichletAllocation(n_topics=10,
                                random_state=123,
                                learning_method='batch')
X_topics = lda.fit_transform(X)
lda.components_.shape
n_top_words = 5
feature_names = count.get_feature_names()
for topic_idx, topic in enumerate(lda.components_):
print("Topic %d:" % (topic_idx + 1))
print(" ".join([feature_names[i]
for i in topic.argsort()\
[:-n_top_words - 1:-1]]))
###Output
Topic 1:
worst minutes awful script stupid
Topic 2:
family mother father children girl
Topic 3:
american war dvd music tv
Topic 4:
human audience cinema art sense
Topic 5:
police guy car dead murder
Topic 6:
horror house sex girl woman
Topic 7:
role performance comedy actor performances
Topic 8:
series episode war episodes tv
Topic 9:
book version original read novel
Topic 10:
action fight guy guys cool
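###Markdown
As a quick check (not part of the book's code) of how the reviews distribute over these 10 topics, we can assign each review to its most probable topic and count the assignments; a minimal sketch, assuming `X_topics` from the cell above is still in memory:
###Code
# index of the most probable topic for each document
dominant_topic = X_topics.argmax(axis=1)
# number of reviews whose dominant topic is topic 1, 2, ..., 10
topic_counts = np.bincount(dominant_topic, minlength=10)
for topic_idx, n_reviews in enumerate(topic_counts):
    print('Topic %d: %d reviews' % (topic_idx + 1, n_reviews))
###Output
_____no_output_____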
###Markdown
Based on reading the 5 most important words for each topic, we may guess that the LDA identified the following topics:

1. Generally bad movies (not really a topic category)
2. Movies about families
3. War movies
4. Art movies
5. Crime movies
6. Horror movies
7. Comedies
8. Movies somehow related to TV shows
9. Movies based on books
10. Action movies

To confirm that the categories make sense based on the reviews, let's print the first 300 characters of 3 movies from the horror movie category (category 6 at index position 5):
###Code
horror = X_topics[:, 5].argsort()[::-1]
for iter_idx, movie_idx in enumerate(horror[:3]):
print('\nHorror movie #%d:' % (iter_idx + 1))
print(df['review'][movie_idx][:300], '...')
###Output
Horror movie #1:
House of Dracula works from the same basic premise as House of Frankenstein from the year before; namely that Universal's three most famous monsters; Dracula, Frankenstein's Monster and The Wolf Man are appearing in the movie together. Naturally, the film is rather messy therefore, but the fact that ...
Horror movie #2:
Okay, what the hell kind of TRASH have I been watching now? "The Witches' Mountain" has got to be one of the most incoherent and insane Spanish exploitation flicks ever and yet, at the same time, it's also strangely compelling. There's absolutely nothing that makes sense here and I even doubt there ...
Horror movie #3:
<br /><br />Horror movie time, Japanese style. Uzumaki/Spiral was a total freakfest from start to finish. A fun freakfest at that, but at times it was a tad too reliant on kitsch rather than the horror. The story is difficult to summarize succinctly: a carefree, normal teenage girl starts coming fac ...
###Markdown
Using the preceding code example, we printed the first 300 characters from the top 3 horror movies, and we can see that the reviews -- even though we don't know which exact movie they belong to -- indeed sound like reviews of horror movies. (However, one might argue that movie 2 could also belong to topic category 1.) Summary ... --- Readers may ignore the next cell.
###Code
! python ../.convert_notebook_to_script.py --input ch08.ipynb --output ch08.py
###Output
[NbConvertApp] Converting notebook ch08.ipynb to script
[NbConvertApp] Writing 11500 bytes to ch08.txt
###Markdown
*Python Machine Learning 2nd Edition* by [Sebastian Raschka](https://sebastianraschka.com), Packt Publishing Ltd. 2017

Original Code Repository: https://github.com/rasbt/python-machine-learning-book-2nd-edition

Adapted for this course: https://github.com/trungngv/python-machine-learning-book-2nd-edition/blob/master/code/ch08/ch08.ipynb

Code License: [MIT License](https://github.com/rasbt/python-machine-learning-book-2nd-edition/blob/master/LICENSE.txt)

Week 5 - Applying Machine Learning To Sentiment Analysis

Overview

- [The movie reviews dataset](The-IMDB-Movie-Review-dataset)
- [Solving problems with machine learning](Solving-problems-with-machine-learning)
  - [Supervised learning](Supervised-learning)
  - [Text classification](Text-classification)
- [Transforming documents into feature vectors](Transforming-documents-into-feature-vectors)
  - [Word indicators representation](Word-indicators-representation)
  - [One-hot-encoding representation](One-hot-encoding-representation)
  - [Bag-of-words representation](Bag-of-words-representation)
  - [Tfidf representation](Tfidf)
- [Creating a modelling dataset](Creating-a-modelling-dataset)
  - [Cleaning text data](Cleaning-text-data)
- [Training a logistic regression model for document classification](Training-a-logistic-regression-model-for-document-classification)
- [Working with bigger data – online algorithms and out-of-core learning](Working-with-bigger-data-–-online-algorithms-and-out-of-core-learning)
- [Summary](Summary)

The IMDB Movie Review dataset

**The task: predict sentiment of movie reviews.**

The original IMDB movie review set can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/). You can download the dataset from GitHub or, if you cloned this repository, you should already have the file and don't have to do anything.
###Code
import pandas as pd
# in newer pandas versions, pass None instead of -1 to show full column contents
pd.set_option('display.max_colwidth', -1)
###Output
_____no_output_____
###Markdown
Read the data in with Pandas. Note that Pandas can read compressed file automatically.
###Code
df = pd.read_csv('movie_data.csv.gz', encoding='utf-8')
###Output
_____no_output_____
###Markdown
What do you notice from the example reviews?
###Code
print(df.shape)
df.head(3)
###Output
(50000, 2)
###Markdown
Solving problems with machine learning

Mapping problems to machine learning solutions: http://scikit-learn.org/stable/tutorial/machine_learning_map/index.html

Supervised learning

The goal of supervised learning is to learn a function that maps an input to an output based on example input-output pairs. Examples:

- House properties => Price

postcode | land size (sqm) | bedrooms | bathrooms | dist 2 station (m) | price (millions)
---|---|---|---|---|---
2000 | 1000 | 4 | 2 | 200 | 2M
2000 | 500 | 4 | 2 | 200 | 1.5M
2100 | 1000 | 3 | 1 | 1000 | 0.4M

- News => Topic

news headline (Vietnamese) | topic
--|--
John McCain - Thượng nghị sĩ nhiều duyên nợ với Việt Nam | Politics
Thí sinh Hoa hậu Việt Nam diễn bikini | Entertainment
Syria tập kín trước khi quyết đấu Việt Nam | Sports / Politics?

Text classification

- Input is text (can be a document or a sentence of varied length)
- Output is categorical -- binary classification if two categories, multi-class if multiple categories
- Is one instance of supervised learning
- Is also one of the natural language processing (NLP) tasks

Examples

- Categorization of news articles into defined topics
- Understanding audience sentiment from social media
- Detection of spam and non-spam emails
- Auto tagging of customer queries
- Categorization of research papers into research topics

Main steps

- **Feature engineering**: Transform text input into a numeric feature vector (i.e. vectorize a document); features can also be categorical
- **Modelling**: Train classification models as a standard classification problem with numeric features

Transforming documents into feature vectors

Suppose we have these documents. How would you convert them into feature vectors?

- The sun is shining
- The weather is not sweet
- 사랑 해요

First, think of an approach that works with 2 documents. Then consider if it works for a large number of documents. In principle:

- Represent a word by a numerical encoding (there are many different ways).
- Create the document's vector by combining the word encodings.
- Each document should be represented by a vector of the same length. Why?

Word indicators representation

Make all documents have the same fixed length. Longer documents are trimmed and shorter documents are padded.

- The sun is shining DUMMY
- The weather is not sweet

Padded documents:

word1 | word2 | word3 | word4 | word5
--|--|--|--|--
the | sun | is | shining | DUMMY
the | weather | is | not | sweet

Then replace each word with its index (the = 1, sun = 2, is = 3, shining = 4, weather = 5, not = 6, sweet = 7, DUMMY = 8).

Vectorized documents:

id | word1 | word2 | word3 | word4 | word5
--|--|--|--|--|--
doc1 | 1 | 2 | 3 | 4 | 8
doc2 | 1 | 5 | 3 | 6 | 7

Exercise

- Does this vectorization work?
- What are the problems?

One-hot-encoding representation

the = [1 0 0 0 0 0]
sun = [0 1 0 0 0 0]
is = [0 0 1 0 0 0]
shining = [0 0 0 1 0 0]
weather = [0 0 0 0 1 0]
sweet = [0 0 0 0 0 1]

The sun is shining = [1 1 1 1 0 0]
Is the sun shining = []?

Bag-of-words representation

By calling the fit_transform method on CountVectorizer, we just constructed the vocabulary of the bag-of-words model and transformed the following three sentences into sparse feature vectors:

1. The sun is shining
2. The weather is sweet
3. The sun is shining, the weather is sweet, and one and one is two
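Before running the CountVectorizer cell below, here is a minimal sketch (not from the book) that implements the padded word-index and one-hot encodings from the tables above, to make those two representations concrete:
###Code
# toy vocabulary: word -> integer index (as in the table above)
vocab = {'the': 1, 'sun': 2, 'is': 3, 'shining': 4,
         'weather': 5, 'not': 6, 'sweet': 7, 'DUMMY': 8}
docs_toy = [['the', 'sun', 'is', 'shining', 'DUMMY'],
            ['the', 'weather', 'is', 'not', 'sweet']]
# word-indicator (index) representation: one integer per position
indexed = [[vocab[w] for w in doc] for doc in docs_toy]
print(indexed)
# one-hot style document vector: 1 if the word occurs in the document
vocab_size = len(vocab)
one_hot_docs = []
for doc in docs_toy:
    vec = [0] * vocab_size
    for w in doc:
        vec[vocab[w] - 1] = 1
    one_hot_docs.append(vec)
print(one_hot_docs)
###Output
_____no_output_____
###Markdown
The CountVectorizer cell below then generalizes this idea by counting how often each vocabulary word occurs in each of the three example sentences: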
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
###Output
_____no_output_____
###Markdown
Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts:
###Code
print(count.vocabulary_)
###Output
{'the': 6, 'sun': 4, 'is': 1, 'shining': 3, 'weather': 8, 'sweet': 5, 'and': 0, 'one': 2, 'two': 7}
###Markdown
As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary, which maps the unique words to integer indices. Next let us print the feature vectors that we just created: Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the CountVectorizer vocabulary. For example, the first feature at index position 0 resembles the count of the word *and*, which only occurs in the last document, and the word *is* at index position 1 (the 2nd feature in the document vectors) occurs in all three sentences. Those values in the feature vectors are also called the raw term frequencies: *tf (t,d)*—the number of times a term t occurs in a document *d*.
###Code
print(bag.toarray())
###Output
[[0 1 0 1 1 0 1 0 0]
[0 1 0 0 0 1 1 0 1]
[2 3 2 1 1 1 2 1 1]]
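###Markdown
To make the mapping between columns and vocabulary entries easier to read, we can also wrap the dense array in a pandas DataFrame with the vocabulary as column labels; a minimal sketch (not part of the book's code):
###Code
import pandas as pd
# sort the vocabulary by its integer index so that the columns
# line up with the columns of bag.toarray()
feature_names = sorted(count.vocabulary_, key=count.vocabulary_.get)
pd.DataFrame(bag.toarray(), columns=feature_names)
###Output
_____no_output_____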
###Markdown
Exercise

- Which featurization seems to be better than the previous one? How?

Tfidf
###Code
np.set_printoptions(precision=2)
###Output
_____no_output_____
###Markdown
When we are analyzing text data, we often encounter words that occur across multiple documents from both classes. Those frequently occurring words typically don't contain useful or discriminatory information. In this subsection, we will learn about a useful technique called term frequency-inverse document frequency (tf-idf) that can be used to downweight those frequently occurring words in the feature vectors. The tf-idf can be defined as the product of the term frequency and the inverse document frequency:

$$\text{tf-idf}(t,d)=\text{tf}(t,d)\times \text{idf}(t,d)$$

Here the tf(t, d) is the term frequency that we introduced in the previous section, and the inverse document frequency *idf(t, d)* can be calculated as:

$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$

where $n_d$ is the total number of documents, and *df(d, t)* is the number of documents *d* that contain the term *t*. Note that adding the constant 1 to the denominator is optional and serves the purpose of assigning a non-zero value to terms that occur in all training samples; the log is used to ensure that low document frequencies are not given too much weight. Scikit-learn implements yet another transformer, the `TfidfTransformer`, that takes the raw term frequencies from `CountVectorizer` as input and transforms them into tf-idfs:
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True,
norm='l2',
smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs))
.toarray())
###Output
[[0. 0.43 0. 0.56 0.56 0. 0.43 0. 0. ]
[0. 0.43 0. 0. 0. 0.56 0.43 0. 0.56]
[0.5 0.45 0.5 0.19 0.19 0.19 0.3 0.25 0.19]]
###Markdown
As we saw in the previous subsection, the word *is* had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word *is* is now associated with a relatively small tf-idf (0.45) in document 3 since it is also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information. However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the `TfidfTransformer` calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier. The equations for the idf and tf-idf that were implemented in scikit-learn are:

$$\text{idf}(t,d) = \log\frac{1 + n_d}{1 + \text{df}(d, t)}$$

The tf-idf equation that was implemented in scikit-learn is as follows:

$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$

While it is also more typical to normalize the raw term frequencies before calculating the tf-idfs, the `TfidfTransformer` normalizes the tf-idfs directly. By default (`norm='l2'`), scikit-learn's TfidfTransformer applies the L2-normalization, which returns a vector of length 1 by dividing an un-normalized feature vector *v* by its L2-norm:

$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$

To make sure that we understand how TfidfTransformer works, let us walk through an example and calculate the tf-idf of the word *is* in the 3rd document. The word *is* has a term frequency of 3 (tf = 3) in document 3, and the document frequency of this term is 3 since the term *is* occurs in all three documents (df = 3). Thus, we can calculate the idf as follows:

$$\text{idf}("is", d3) = \log \frac{1+3}{1+3} = 0$$

Now in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:

$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
###Code
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
###Output
tf-idf of term "is" = 3.00
###Markdown
If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vector: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]. However, we notice that the values in this feature vector are different from the values that we obtained from the TfidfTransformer that we used previously. The final step that we are missing in this tf-idf calculation is the L2-normalization, which can be applied as follows:

$$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]}{\sqrt{3.39^2 + 3.0^2 + 3.39^2 + 1.29^2 + 1.29^2 + 1.29^2 + 2.0^2 + 1.69^2 + 1.29^2}}$$

$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$

$$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$

As we can see, the results match the results returned by scikit-learn's `TfidfTransformer` (below). Since we now understand how tf-idfs are calculated, let us proceed to the next sections and apply those concepts to the movie review dataset.
###Code
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
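###Markdown
As a cross-check (not part of the book's code), we can also reproduce the entire smoothed, L2-normalized tf-idf matrix from the raw counts with a few lines of NumPy and compare it against the `TfidfTransformer` output above; a minimal sketch:
###Code
tf = count.fit_transform(docs).toarray()       # raw term frequencies
df_t = np.sum(tf > 0, axis=0)                  # document frequency of each term
n_d = tf.shape[0]                              # total number of documents
idf = np.log((1. + n_d) / (1. + df_t)) + 1.    # smoothed idf as used by scikit-learn
tfidf_manual = tf * idf                        # un-normalized tf-idfs
# L2-normalize each document vector
tfidf_manual = tfidf_manual / np.sqrt(np.sum(tfidf_manual**2, axis=1, keepdims=True))
print(tfidf_manual)
###Output
_____no_output_____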
###Markdown
Creating a modelling dataset Cleaning text data
###Code
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)',
text)
text = (re.sub('[\W]+', ' ', text.lower()) +
' '.join(emoticons).replace('-', ''))
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
###Output
_____no_output_____
###Markdown
Training a logistic regression model for text classification
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
###Output
_____no_output_____
###Markdown
Tokenization and Stemming
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
###Output
_____no_output_____
###Markdown
Model training and hyperparameters search
###Code
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import RandomizedSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None,
stop_words='english')
param_grid = {
'vect__ngram_range': [(1, 1)],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf': [False],
'vect__norm': [None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]
}
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
rs_lr_tfidf = RandomizedSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=3,
n_iter=10,
verbose=1,
n_jobs=-1)
###Output
_____no_output_____
###Markdown
**Important Note about `n_jobs`**Please note that it is highly recommended to use `n_jobs=-1` (instead of `n_jobs=1`) in the previous code example to utilize all available cores on your machine and speed up the grid search. However, some Windows users reported issues when running the previous code with the `n_jobs=-1` setting related to pickling the tokenizer and tokenizer_porter functions for multiprocessing on Windows. Another workaround would be to replace those two functions, `[tokenizer, tokenizer_porter]`, with `[str.split]`. However, note that the replacement by the simple `str.split` would not support stemming.
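If you run into that Windows pickling issue, a minimal workaround sketch (hypothetical, but the parameter names match the pipeline defined above) could look like this:
###Code
# hypothetical workaround for Windows multiprocessing/pickling issues:
# use the built-in str.split instead of the custom tokenizer functions
# (note: this drops Porter stemming from the search space)
param_grid_win = {
    'vect__ngram_range': [(1, 1)],
    'vect__tokenizer': [str.split],
    'vect__use_idf': [False],
    'vect__norm': [None],
    'clf__penalty': ['l1', 'l2'],
    'clf__C': [1.0, 10.0, 100.0]
}
###Output
_____no_output_____
###Markdown
Passing `param_grid_win` to the randomized search instead of `param_grid` then avoids pickling the custom tokenizers; the fit below otherwise proceeds as before.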
###Code
rs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % rs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % rs_lr_tfidf.best_score_)
clf = rs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
###Output
Test Accuracy: 0.882
###Markdown
Working with bigger data - online algorithms and out-of-core learning
###Code
# This cell is not contained in the book but
# added for convenience so that the notebook
# can be executed starting here, without
# executing prior code in this notebook
import os
import gzip
if not os.path.isfile('movie_data.csv'):
    if not os.path.isfile('movie_data.csv.gz'):
        print('Please place a copy of the movie_data.csv.gz '
              'in this directory. You can obtain it by '
              'a) executing the code in the beginning of this '
              'notebook or b) by downloading it from GitHub: '
              'https://github.com/rasbt/python-machine-learning-'
              'book-2nd-edition/blob/master/code/ch08/movie_data.csv.gz')
    else:
        with gzip.open('movie_data.csv.gz', 'rb') as in_f, \
                open('movie_data.csv', 'wb') as out_f:
            out_f.write(in_f.read())
import numpy as np
import re
from nltk.corpus import stopwords
# The `stop` is defined as earlier in this chapter
# Added it here for convenience, so that this section
# can be run as standalone without executing prior code
# in the directory
stop = stopwords.words('english')
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
###Output
_____no_output_____
###Markdown
**Note**: In scikit-learn >= 0.19, the deprecated `n_iter` parameter of `SGDClassifier` (and `Perceptron`) is replaced by `max_iter`; the version check in the next cell picks the appropriate keyword.
###Code
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
if Version(sklearn_version) < '0.19':
    # older scikit-learn versions use the deprecated `n_iter` keyword
    clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
else:
    # note: in scikit-learn >= 1.1 the logistic loss is called 'log_loss'
    clf = SGDClassifier(loss='log', random_state=1, max_iter=1)
doc_stream = stream_docs(path='movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
###Output
Accuracy: 0.867
###Markdown
Copyright (c) 2015, 2016 [Sebastian Raschka](sebastianraschka.com)

https://github.com/rasbt/python-machine-learning-book

[MIT License](https://github.com/rasbt/python-machine-learning-book/blob/master/LICENSE.txt)

Python Machine Learning - Code Examples

Chapter 8 - Applying Machine Learning To Sentiment Analysis

Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,scikit-learn,nltk
###Output
The watermark extension is already loaded. To reload it, use:
%reload_ext watermark
Sebastian Raschka
last updated: 2016-06-30
CPython 3.5.1
IPython 4.2.0
numpy 1.11.0
pandas 0.18.1
matplotlib 1.5.1
scikit-learn 0.17.1
nltk 3.2.1
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.*

Overview

- [Obtaining the IMDb movie review dataset](Obtaining-the-IMDb-movie-review-dataset)
- [Introducing the bag-of-words model](Introducing-the-bag-of-words-model)
  - [Transforming words into feature vectors](Transforming-words-into-feature-vectors)
  - [Assessing word relevancy via term frequency-inverse document frequency](Assessing-word-relevancy-via-term-frequency-inverse-document-frequency)
  - [Cleaning text data](Cleaning-text-data)
  - [Processing documents into tokens](Processing-documents-into-tokens)
- [Training a logistic regression model for document classification](Training-a-logistic-regression-model-for-document-classification)
- [Working with bigger data – online algorithms and out-of-core learning](Working-with-bigger-data-–-online-algorithms-and-out-of-core-learning)
- [Summary](Summary)

Obtaining the IMDb movie review dataset

The IMDB movie review set can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/). After downloading the dataset, decompress the files.

A) If you are working with Linux or MacOS X, open a new terminal window, `cd` into the download directory and execute `tar -zxf aclImdb_v1.tar.gz`

B) If you are working with Windows, download an archiver such as [7Zip](http://www.7-zip.org) to extract the files from the download archive.

Compatibility Note:

I received an email from a reader who was having trouble with reading the movie review texts due to encoding issues. Typically, Python's default encoding is set to `'utf-8'`, which shouldn't cause troubles when running this IPython notebook. You can simply check the encoding on your machine by firing up a new Python interpreter from the command line terminal and executing

    >>> import sys
    >>> sys.getdefaultencoding()

If the returned result is **not** `'utf-8'`, you probably need to change your Python's encoding to `'utf-8'`, for example by typing `export PYTHONIOENCODING=utf8` in your terminal shell prior to running this IPython notebook. (Note that this is a temporary change, and it needs to be executed in the same shell that you'll use to launch `ipython notebook`.)

Alternatively, you can replace the lines

    with open(os.path.join(path, file), 'r') as infile:
    ...
    pd.read_csv('./movie_data.csv')
    ...
    df.to_csv('./movie_data.csv', index=False)

by

    with open(os.path.join(path, file), 'r', encoding='utf-8') as infile:
    ...
    pd.read_csv('./movie_data.csv', encoding='utf-8')
    ...
    df.to_csv('./movie_data.csv', index=False, encoding='utf-8')

in the following cells to achieve the desired effect.
###Code
import pyprind
import pandas as pd
import os
# change the `basepath` to the directory of the
# unzipped movie dataset
#basepath = '/Users/Sebastian/Desktop/aclImdb/'
basepath = './aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in os.listdir(path):
with open(os.path.join(path, file), 'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]], ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
###Output
0% 100%
[##############################] | ETA: 00:00:00
Total time elapsed: 00:06:23
###Markdown
Shuffling the DataFrame:
###Code
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
###Output
_____no_output_____
###Markdown
Optional: Saving the assembled data as CSV file:
###Code
df.to_csv('./movie_data.csv', index=False)
import pandas as pd
df = pd.read_csv('./movie_data.csv')
df.head(3)
###Output
_____no_output_____
###Markdown
Note: If you have problems with creating the `movie_data.csv` file in the previous chapter, you can download a zip archive at https://github.com/rasbt/python-machine-learning-book/tree/master/code/datasets/movie

Introducing the bag-of-words model ...

Transforming documents into feature vectors

By calling the fit_transform method on CountVectorizer, we just constructed the vocabulary of the bag-of-words model and transformed the following three sentences into sparse feature vectors:

1. The sun is shining
2. The weather is sweet
3. The sun is shining, the weather is sweet, and one and one is two
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
###Output
_____no_output_____
###Markdown
Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts:
###Code
print(count.vocabulary_)
###Output
{'sun': 4, 'and': 0, 'is': 1, 'the': 6, 'shining': 3, 'two': 7, 'sweet': 5, 'weather': 8, 'one': 2}
###Markdown
As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary, which maps the unique words to integer indices. Next let us print the feature vectors that we just created: Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the CountVectorizer vocabulary. For example, the first feature at index position 0 resembles the count of the word *and*, which only occurs in the last document, and the word *is* at index position 1 (the 2nd feature in the document vectors) occurs in all three sentences. Those values in the feature vectors are also called the raw term frequencies: *tf (t,d)*—the number of times a term t occurs in a document *d*.
###Code
print(bag.toarray())
###Output
[[0 1 0 1 1 0 1 0 0]
[0 1 0 0 0 1 1 0 1]
[2 3 2 1 1 1 2 1 1]]
###Markdown
Assessing word relevancy via term frequency-inverse document frequency
###Code
np.set_printoptions(precision=2)
###Output
_____no_output_____
###Markdown
When we are analyzing text data, we often encounter words that occur across multiple documents from both classes. Those frequently occurring words typically don't contain useful or discriminatory information. In this subsection, we will learn about a useful technique called term frequency-inverse document frequency (tf-idf) that can be used to downweight those frequently occurring words in the feature vectors. The tf-idf can be defined as the product of the term frequency and the inverse document frequency:

$$\text{tf-idf}(t,d)=\text{tf}(t,d)\times \text{idf}(t,d)$$

Here the tf(t, d) is the term frequency that we introduced in the previous section, and the inverse document frequency *idf(t, d)* can be calculated as:

$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$

where $n_d$ is the total number of documents, and *df(d, t)* is the number of documents *d* that contain the term *t*. Note that adding the constant 1 to the denominator is optional and serves the purpose of assigning a non-zero value to terms that occur in all training samples; the log is used to ensure that low document frequencies are not given too much weight. Scikit-learn implements yet another transformer, the `TfidfTransformer`, that takes the raw term frequencies from `CountVectorizer` as input and transforms them into tf-idfs:
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
###Output
[[ 0. 0.43 0. 0.56 0.56 0. 0.43 0. 0. ]
[ 0. 0.43 0. 0. 0. 0.56 0.43 0. 0.56]
[ 0.5 0.45 0.5 0.19 0.19 0.19 0.3 0.25 0.19]]
###Markdown
As we saw in the previous subsection, the word *is* had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word *is* is now associated with a relatively small tf-idf (0.45) in document 3 since it is also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information. However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the `TfidfTransformer` calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier. The equations for the idf and tf-idf that were implemented in scikit-learn are:

$$\text{idf}(t,d) = \log\frac{1 + n_d}{1 + \text{df}(d, t)}$$

The tf-idf equation that was implemented in scikit-learn is as follows:

$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$

While it is also more typical to normalize the raw term frequencies before calculating the tf-idfs, the `TfidfTransformer` normalizes the tf-idfs directly. By default (`norm='l2'`), scikit-learn's TfidfTransformer applies the L2-normalization, which returns a vector of length 1 by dividing an un-normalized feature vector *v* by its L2-norm:

$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$

To make sure that we understand how TfidfTransformer works, let us walk through an example and calculate the tf-idf of the word *is* in the 3rd document. The word *is* has a term frequency of 3 (tf = 3) in document 3, and the document frequency of this term is 3 since the term *is* occurs in all three documents (df = 3). Thus, we can calculate the idf as follows:

$$\text{idf}("is", d3) = \log \frac{1+3}{1+3} = 0$$

Now in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:

$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
###Code
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
###Output
tf-idf of term "is" = 3.00
###Markdown
If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vector: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]. However, we notice that the values in this feature vector are different from the values that we obtained from the TfidfTransformer that we used previously. The final step that we are missing in this tf-idf calculation is the L2-normalization, which can be applied as follows:

$$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]}{\sqrt{3.39^2 + 3.0^2 + 3.39^2 + 1.29^2 + 1.29^2 + 1.29^2 + 2.0^2 + 1.69^2 + 1.29^2}}$$

$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$

$$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$

As we can see, the results match the results returned by scikit-learn's `TfidfTransformer` (below). Since we now understand how tf-idfs are calculated, let us proceed to the next sections and apply those concepts to the movie review dataset.
###Code
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
###Markdown
Cleaning text data
###Code
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
###Output
_____no_output_____
###Markdown
Processing documents into tokens
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
###Output
_____no_output_____
###Markdown
Training a logistic regression model for document classification Strip HTML and punctuation to speed up the GridSearch later:
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=-1)
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
###Output
Test Accuracy: 0.899
###Markdown
Start comment: Please note that `gs_lr_tfidf.best_score_` is the average k-fold cross-validation score. I.e., if we have a `GridSearchCV` object with 5-fold cross-validation (like the one above), the `best_score_` attribute returns the average score over the 5-folds of the best model. To illustrate this with an example:
###Code
from sklearn.cross_validation import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
import numpy as np
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
cv5_idx = list(StratifiedKFold(y, n_folds=5, shuffle=False, random_state=0))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
###Output
_____no_output_____
###Markdown
By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (`cv5_idx`) to the `cross_val_score` scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds. Next, let us use the `GridSearchCV` object and feed it the same 5 cross-validation sets (via the pre-generated `cv5_idx` indices):
###Code
from sklearn.grid_search import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
###Output
Fitting 5 folds for each of 1 candidates, totalling 5 fits
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.400000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.200000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
###Markdown
As we can see, the scores for the 5 folds are exactly the same as the ones from `cross_val_score` earlier. Now, the best_score_ attribute of the `GridSearchCV` object, which becomes available after `fit`ting, returns the average accuracy score of the best model:
###Code
gs.best_score_
###Output
_____no_output_____
###Markdown
As we can see, the result above is consistent with the average score computed by `cross_val_score`.
###Code
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
###Output
_____no_output_____
###Markdown
End comment. Working with bigger data - online algorithms and out-of-core learning
###Code
import numpy as np
import re
from nltk.corpus import stopwords
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='./movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path='./movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
###Output
_____no_output_____
###Markdown
*Python Machine Learning 2nd Edition* by [Sebastian Raschka](https://sebastianraschka.com), Packt Publishing Ltd. 2017

Code Repository: https://github.com/rasbt/python-machine-learning-book-2nd-edition

Code License: [MIT License](https://github.com/rasbt/python-machine-learning-book-2nd-edition/blob/master/LICENSE.txt)

Python Machine Learning - Code Examples

Chapter 8 - Applying Machine Learning To Sentiment Analysis

Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a "Sebastian Raschka" -u -d -v -p numpy,pandas,sklearn,nltk
###Output
Sebastian Raschka
last updated: 2018-07-02
CPython 3.6.5
IPython 6.4.0
numpy 1.14.5
pandas 0.23.1
sklearn 0.19.1
nltk 3.3
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.*

Overview

- [Preparing the IMDb movie review data for text processing](Preparing-the-IMDb-movie-review-data-for-text-processing)
  - [Obtaining the IMDb movie review dataset](Obtaining-the-IMDb-movie-review-dataset)
  - [Preprocessing the movie dataset into more convenient format](Preprocessing-the-movie-dataset-into-more-convenient-format)
- [Introducing the bag-of-words model](Introducing-the-bag-of-words-model)
  - [Transforming words into feature vectors](Transforming-words-into-feature-vectors)
  - [Assessing word relevancy via term frequency-inverse document frequency](Assessing-word-relevancy-via-term-frequency-inverse-document-frequency)
  - [Cleaning text data](Cleaning-text-data)
  - [Processing documents into tokens](Processing-documents-into-tokens)
- [Training a logistic regression model for document classification](Training-a-logistic-regression-model-for-document-classification)
- [Working with bigger data – online algorithms and out-of-core learning](Working-with-bigger-data-–-online-algorithms-and-out-of-core-learning)
- [Topic modeling](Topic-modeling)
  - [Decomposing text documents with Latent Dirichlet Allocation](Decomposing-text-documents-with-Latent-Dirichlet-Allocation)
  - [Latent Dirichlet Allocation with scikit-learn](Latent-Dirichlet-Allocation-with-scikit-learn)
- [Summary](Summary)

Preparing the IMDb movie review data for text processing

Obtaining the IMDb movie review dataset

The IMDB movie review set can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/). After downloading the dataset, decompress the files.

A) If you are working with Linux or MacOS X, open a new terminal window, `cd` into the download directory and execute `tar -zxf aclImdb_v1.tar.gz`

B) If you are working with Windows, download an archiver such as [7Zip](http://www.7-zip.org) to extract the files from the download archive.

**Optional code to download and unzip the dataset via Python:**
###Code
import os
import sys
import tarfile
import time
source = 'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'
target = 'aclImdb_v1.tar.gz'
def reporthook(count, block_size, total_size):
global start_time
if count == 0:
start_time = time.time()
return
duration = time.time() - start_time
progress_size = int(count * block_size)
speed = progress_size / (1024.**2 * duration)
percent = count * block_size * 100. / total_size
sys.stdout.write("\r%d%% | %d MB | %.2f MB/s | %d sec elapsed" %
(percent, progress_size / (1024.**2), speed, duration))
sys.stdout.flush()
if not os.path.isdir('aclImdb') and not os.path.isfile('aclImdb_v1.tar.gz'):
if (sys.version_info < (3, 0)):
import urllib
urllib.urlretrieve(source, target, reporthook)
else:
import urllib.request
urllib.request.urlretrieve(source, target, reporthook)
if not os.path.isdir('aclImdb'):
with tarfile.open(target, 'r:gz') as tar:
tar.extractall()
###Output
_____no_output_____
###Markdown
Preprocessing the movie dataset into more convenient format
###Code
import pyprind
import pandas as pd
import os
# change the `basepath` to the directory of the
# unzipped movie dataset
basepath = 'aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in sorted(os.listdir(path)):
with open(os.path.join(path, file),
'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]],
ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
###Output
0% [##############################] 100% | ETA: 00:00:00
Total time elapsed: 00:00:28
###Markdown
Shuffling the DataFrame:
###Code
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
###Output
_____no_output_____
###Markdown
Optional: Saving the assembled data as CSV file:
###Code
df.to_csv('movie_data.csv', index=False, encoding='utf-8')
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
df.shape
###Output
_____no_output_____
###Markdown
Note: If you have problems with creating the `movie_data.csv`, you can download a zip archive at https://github.com/rasbt/python-machine-learning-book-2nd-edition/tree/master/code/ch08/

Introducing the bag-of-words model ...

Transforming documents into feature vectors

By calling the fit_transform method on CountVectorizer, we just constructed the vocabulary of the bag-of-words model and transformed the following three sentences into sparse feature vectors:

1. The sun is shining
2. The weather is sweet
3. The sun is shining, the weather is sweet, and one and one is two
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
###Output
_____no_output_____
###Markdown
Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts:
###Code
print(count.vocabulary_)
###Output
{'the': 6, 'sun': 4, 'is': 1, 'shining': 3, 'weather': 8, 'sweet': 5, 'and': 0, 'one': 2, 'two': 7}
###Markdown
As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary, which maps the unique words to integer indices. Next let us print the feature vectors that we just created: Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the CountVectorizer vocabulary. For example, the first feature at index position 0 resembles the count of the word *and*, which only occurs in the last document, and the word *is* at index position 1 (the 2nd feature in the document vectors) occurs in all three sentences. Those values in the feature vectors are also called the raw term frequencies: *tf (t,d)*—the number of times a term t occurs in a document *d*.
###Code
print(bag.toarray())
###Output
[[0 1 0 1 1 0 1 0 0]
[0 1 0 0 0 1 1 0 1]
[2 3 2 1 1 1 2 1 1]]
###Markdown
Assessing word relevancy via term frequency-inverse document frequency
###Code
np.set_printoptions(precision=2)
###Output
_____no_output_____
###Markdown
When we are analyzing text data, we often encounter words that occur across multiple documents from both classes. Those frequently occurring words typically don't contain useful or discriminatory information. In this subsection, we will learn about a useful technique called term frequency-inverse document frequency (tf-idf) that can be used to downweight those frequently occurring words in the feature vectors. The tf-idf can be defined as the product of the term frequency and the inverse document frequency:

$$\text{tf-idf}(t,d)=\text{tf}(t,d)\times \text{idf}(t,d)$$

Here the tf(t, d) is the term frequency that we introduced in the previous section, and the inverse document frequency *idf(t, d)* can be calculated as:

$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$

where $n_d$ is the total number of documents, and *df(d, t)* is the number of documents *d* that contain the term *t*. Note that adding the constant 1 to the denominator is optional and serves the purpose of assigning a non-zero value to terms that occur in all training samples; the log is used to ensure that low document frequencies are not given too much weight. Scikit-learn implements yet another transformer, the `TfidfTransformer`, that takes the raw term frequencies from `CountVectorizer` as input and transforms them into tf-idfs:
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True,
norm='l2',
smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs))
.toarray())
###Output
[[0. 0.43370786 0. 0.55847784 0.55847784 0.
0.43370786 0. 0. ]
[0. 0.43370786 0. 0. 0. 0.55847784
0.43370786 0. 0.55847784]
[0.50238645 0.44507629 0.50238645 0.19103892 0.19103892 0.19103892
0.29671753 0.25119322 0.19103892]]
###Markdown
As we saw in the previous subsection, the word *is* had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word *is* is now associated with a relatively small tf-idf (0.45) in document 3 since it is also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information. However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the `TfidfTransformer` calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier. The equations for the idf and tf-idf that were implemented in scikit-learn are:

$$\text{idf}(t,d) = \log\frac{1 + n_d}{1 + \text{df}(d, t)}$$

The tf-idf equation that was implemented in scikit-learn is as follows:

$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$

While it is also more typical to normalize the raw term frequencies before calculating the tf-idfs, the `TfidfTransformer` normalizes the tf-idfs directly. By default (`norm='l2'`), scikit-learn's TfidfTransformer applies the L2-normalization, which returns a vector of length 1 by dividing an un-normalized feature vector *v* by its L2-norm:

$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$

To make sure that we understand how TfidfTransformer works, let us walk through an example and calculate the tf-idf of the word *is* in the 3rd document. The word *is* has a term frequency of 3 (tf = 3) in document 3, and the document frequency of this term is 3 since the term *is* occurs in all three documents (df = 3). Thus, we can calculate the idf as follows:

$$\text{idf}("is", d3) = \log \frac{1+3}{1+3} = 0$$

Now in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:

$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
###Code
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
###Output
tf-idf of term "is" = 3.00
###Markdown
If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vector: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]. However, we notice that the values in this feature vector are different from the values that we obtained from the TfidfTransformer that we used previously. The final step that we are missing in this tf-idf calculation is the L2-normalization, which can be applied as follows:

$$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]}{\sqrt{3.39^2 + 3.0^2 + 3.39^2 + 1.29^2 + 1.29^2 + 1.29^2 + 2.0^2 + 1.69^2 + 1.29^2}}$$

$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$

$$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$

As we can see, the results match the results returned by scikit-learn's `TfidfTransformer` (below). Since we now understand how tf-idfs are calculated, let us proceed to the next sections and apply those concepts to the movie review dataset.
###Code
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
###Markdown
Cleaning text data
###Code
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)',
text)
text = (re.sub('[\W]+', ' ', text.lower()) +
' '.join(emoticons).replace('-', ''))
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
df['review']
###Output
_____no_output_____
###Markdown
Processing documents into tokens
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
###Output
_____no_output_____
###Markdown
Training a logistic regression model for document classification Strip HTML and punctuation to speed up the GridSearch later:
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=-1)
###Output
_____no_output_____
###Markdown
**Important Note about `n_jobs`**

Please note that it is highly recommended to use `n_jobs=-1` (instead of `n_jobs=1`) in the previous code example to utilize all available cores on your machine and speed up the grid search. However, some Windows users reported issues when running the previous code with the `n_jobs=-1` setting related to pickling the tokenizer and tokenizer_porter functions for multiprocessing on Windows. Another workaround would be to replace those two functions, `[tokenizer, tokenizer_porter]`, with `[str.split]`. However, note that the replacement by the simple `str.split` would not support stemming.

**Important Note about the running time**

Executing the following code cell **may take up to 30-60 min** depending on your machine, since based on the parameter grid we defined, there are 2*2*2*3*5 + 2*2*2*3*5 = 240 models to fit.

If you do not wish to wait so long, you could reduce the size of the dataset by decreasing the number of training samples, for example, as follows:

    X_train = df.loc[:2500, 'review'].values
    y_train = df.loc[:2500, 'sentiment'].values

However, note that decreasing the training set size to such a small number will likely result in poorly performing models. Alternatively, you can delete parameters from the grid above to reduce the number of models to fit -- for example, by using the following:

    param_grid = [{'vect__ngram_range': [(1, 1)],
                   'vect__stop_words': [stop, None],
                   'vect__tokenizer': [tokenizer],
                   'clf__penalty': ['l1', 'l2'],
                   'clf__C': [1.0, 10.0]},
                  ]
###Code
## @Readers: PLEASE IGNORE THIS CELL
##
## This cell is meant to generate more
## "logging" output when this notebook is run
## on the Travis Continuous Integration
## platform to test the code as well as
## speeding up the run using a smaller
## dataset for debugging
if 'TRAVIS' in os.environ:
gs_lr_tfidf.verbose=2
X_train = df.loc[:250, 'review'].values
y_train = df.loc[:250, 'sentiment'].values
X_test = df.loc[25000:25250, 'review'].values
y_test = df.loc[25000:25250, 'sentiment'].values
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
###Output
Test Accuracy: 0.899
###Markdown
Start comment: Please note that `gs_lr_tfidf.best_score_` is the average k-fold cross-validation score. I.e., if we have a `GridSearchCV` object with 5-fold cross-validation (like the one above), the `best_score_` attribute returns the average score over the 5-folds of the best model. To illustrate this with an example:
###Code
from sklearn.linear_model import LogisticRegression
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
cv5_idx = list(StratifiedKFold(n_splits=5, shuffle=False).split(X, y))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
###Output
_____no_output_____
###Markdown
By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (`cv5_idx`) to the `cross_val_score` scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds. Next, let us use the `GridSearchCV` object and feed it the same 5 cross-validation sets (via the pre-generated `cv5_idx` indices):
###Code
from sklearn.model_selection import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
###Output
Fitting 5 folds for each of 1 candidates, totalling 5 fits
[CV 1/5] END ..................................., score=0.600 total time= 0.0s
[CV 2/5] END ..................................., score=0.400 total time= 0.0s
[CV 3/5] END ..................................., score=0.600 total time= 0.0s
[CV 4/5] END ..................................., score=0.200 total time= 0.0s
[CV 5/5] END ..................................., score=0.600 total time= 0.0s
###Markdown
As we can see, the scores for the 5 folds are exactly the same as the ones from `cross_val_score` earlier. Now, the best_score_ attribute of the `GridSearchCV` object, which becomes available after `fit`ting, returns the average accuracy score of the best model:
###Code
gs.best_score_
###Output
_____no_output_____
###Markdown
As we can see, the result above is consistent with the average score computed by `cross_val_score`.
###Code
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
###Output
_____no_output_____
###Markdown
End comment. Working with bigger data - online algorithms and out-of-core learning
###Code
# This cell is not contained in the book but
# added for convenience so that the notebook
# can be executed starting here, without
# executing prior code in this notebook
import os
import gzip
if not os.path.isfile('movie_data.csv'):
if not os.path.isfile('movie_data.csv.gz'):
print('Please place a copy of the movie_data.csv.gz '
'in this directory. You can obtain it by '
'a) executing the code in the beginning of this '
'notebook or b) by downloading it from GitHub: '
'https://github.com/rasbt/python-machine-learning-'
'book-2nd-edition/blob/master/code/ch08/movie_data.csv.gz')
else:
with gzip.open('movie_data.csv.gz', 'rb') as in_f, \
open('movie_data.csv', 'wb') as out_f:
out_f.write(in_f.read())
import numpy as np
import re
from nltk.corpus import stopwords
# The `stop` is defined as earlier in this chapter
# Added it here for convenience, so that this section
# can be run as standalone without executing prior code
# in the directory
stop = stopwords.words('english')
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
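# each row of movie_data.csv ends with ',<label>\n', so line[:-3] is the
# review text and line[-2] is the 0/1 sentiment label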
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='movie_data.csv'))
def get_minibatch(doc_stream, size):
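# collect up to `size` (text, label) pairs from the document generator;
# returns (None, None) once the stream is exhausted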
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
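# HashingVectorizer needs no fitted vocabulary, which makes it suitable for
# out-of-core learning; 2**21 hash buckets keep the risk of collisions low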
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
###Output
_____no_output_____
###Markdown
**Note**- The `n_iter` parameter used by `SGDClassifier` below was renamed to `max_iter` in scikit-learn >= 0.19, i.e., you can replace `SGDClassifier(n_iter, ...)` by `SGDClassifier(max_iter, ...)`. The version check in the next cell selects the appropriate keyword automatically.
###Code
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
if Version(sklearn_version) < '0.19':
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
else:
clf = SGDClassifier(loss='log', random_state=1, max_iter=1)
doc_stream = stream_docs(path='movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
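# stream 45 minibatches of 1,000 reviews each for incremental training;
# the following 5,000 reviews of the stream are held out for evaluation below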
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
###Output
_____no_output_____
###Markdown
Topic modeling Decomposing text documents with Latent Dirichlet Allocation Latent Dirichlet Allocation with scikit-learn
###Code
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
## @Readers: PLEASE IGNORE THIS CELL
##
## This cell is meant to create a smaller dataset if
## the notebook is run on the Travis Continuous Integration
## platform to test the code on a smaller dataset
## to prevent timeout errors and just serves a debugging tool
## for this notebook
if 'TRAVIS' in os.environ:
df.loc[:500].to_csv('movie_data.csv')
df = pd.read_csv('movie_data.csv', nrows=500)
print('SMALL DATA SUBSET CREATED FOR TESTING')
from sklearn.feature_extraction.text import CountVectorizer
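# max_df=.1 excludes words that appear in more than 10% of the reviews,
# and max_features caps the vocabulary at the 5,000 most frequent words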
count = CountVectorizer(stop_words='english',
max_df=.1,
max_features=5000)
X = count.fit_transform(df['review'].values)
from sklearn.decomposition import LatentDirichletAllocation
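# learning_method='batch' updates the topic model on all training data in each
# iteration, which is slower than the 'online' mini-batch mode but can give
# more accurate results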
lda = LatentDirichletAllocation(n_components=10,
random_state=123,
learning_method='batch')
X_topics = lda.fit_transform(X)
lda.components_.shape
n_top_words = 5
feature_names = count.get_feature_names()
for topic_idx, topic in enumerate(lda.components_):
print("Topic %d:" % (topic_idx + 1))
print(" ".join([feature_names[i]
for i in topic.argsort()\
[:-n_top_words - 1:-1]]))
###Output
Topic 1:
worst minutes awful script stupid
Topic 2:
family mother father children girl
Topic 3:
american war dvd music tv
Topic 4:
human audience cinema art sense
Topic 5:
police guy car dead murder
Topic 6:
horror house sex girl woman
Topic 7:
role performance comedy actor performances
Topic 8:
series episode war episodes tv
Topic 9:
book version original read novel
Topic 10:
action fight guy guys cool
###Markdown
Based on reading the 5 most important words for each topic, we may guess that the LDA identified the following topics: 1. Generally bad movies (not really a topic category) 2. Movies about families 3. War movies 4. Art movies 5. Crime movies 6. Horror movies 7. Comedies 8. Movies somehow related to TV shows 9. Movies based on books 10. Action movies To confirm that the categories make sense based on the reviews, let's look at 3 movies from the horror movie category (category 6 at index position 5):
###Code
horror = X_topics[:, 5].argsort()[::-1]
for iter_idx, movie_idx in enumerate(horror[:3]):
print('\nHorror movie #%d:' % (iter_idx + 1))
print(df['review'][movie_idx][:300], '...')
###Output
Horror movie #1:
House of Dracula works from the same basic premise as House of Frankenstein from the year before; namely that Universal's three most famous monsters; Dracula, Frankenstein's Monster and The Wolf Man are appearing in the movie together. Naturally, the film is rather messy therefore, but the fact that ...
Horror movie #2:
Okay, what the hell kind of TRASH have I been watching now? "The Witches' Mountain" has got to be one of the most incoherent and insane Spanish exploitation flicks ever and yet, at the same time, it's also strangely compelling. There's absolutely nothing that makes sense here and I even doubt there ...
Horror movie #3:
<br /><br />Horror movie time, Japanese style. Uzumaki/Spiral was a total freakfest from start to finish. A fun freakfest at that, but at times it was a tad too reliant on kitsch rather than the horror. The story is difficult to summarize succinctly: a carefree, normal teenage girl starts coming fac ...
###Markdown
Using the preceding code example, we printed the first 300 characters from the top 3 horror movies, and we can see that the reviews -- even though we don't know which exact movie they belong to -- indeed sound like reviews of horror movies. (However, one might argue that movie 2 could also belong to topic category 1.) Summary ... ---Readers may ignore the next cell.
###Code
! python ../.convert_notebook_to_script.py --input ch08.ipynb --output ch08.py
###Output
[NbConvertApp] Converting notebook ch08.ipynb to script
[NbConvertApp] Writing 11500 bytes to ch08.txt
###Markdown
Copyright (c) 2015, 2016 [Sebastian Raschka](sebastianraschka.com)https://github.com/rasbt/python-machine-learning-book[MIT License](https://github.com/rasbt/python-machine-learning-book/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 8 - Applying Machine Learning To Sentiment Analysis Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,sklearn,nltk
###Output
Sebastian Raschka
last updated: 2016-09-29
CPython 3.5.2
IPython 5.1.0
numpy 1.11.1
pandas 0.18.1
matplotlib 1.5.1
sklearn 0.18
nltk 3.2.1
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview - [Obtaining the IMDb movie review dataset](Obtaining-the-IMDb-movie-review-dataset)- [Introducing the bag-of-words model](Introducing-the-bag-of-words-model) - [Transforming words into feature vectors](Transforming-words-into-feature-vectors) - [Assessing word relevancy via term frequency-inverse document frequency](Assessing-word-relevancy-via-term-frequency-inverse-document-frequency) - [Cleaning text data](Cleaning-text-data) - [Processing documents into tokens](Processing-documents-into-tokens)- [Training a logistic regression model for document classification](Training-a-logistic-regression-model-for-document-classification)- [Working with bigger data – online algorithms and out-of-core learning](Working-with-bigger-data-–-online-algorithms-and-out-of-core-learning)- [Summary](Summary)
###Code
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
###Output
_____no_output_____
###Markdown
Obtaining the IMDb movie review dataset The IMDB movie review set can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/). After downloading the dataset, decompress the files. A) If you are working with Linux or MacOS X, open a new terminal window, `cd` into the download directory and execute `tar -zxf aclImdb_v1.tar.gz` B) If you are working with Windows, download an archiver such as [7Zip](http://www.7-zip.org) to extract the files from the download archive. Compatibility Note: I received an email from a reader who was having troubles with reading the movie review texts due to encoding issues. Typically, Python's default encoding is set to `'utf-8'`, which shouldn't cause troubles when running this IPython notebook. You can simply check the encoding on your machine by firing up a new Python interpreter from the command line terminal and execute >>> import sys >>> sys.getdefaultencoding() If the returned result is **not** `'utf-8'`, you probably need to change your Python's encoding to `'utf-8'`, for example by typing `export PYTHONIOENCODING=utf8` in your terminal shell prior to running this IPython notebook. (Note that this is a temporary change, and it needs to be executed in the same shell that you'll use to launch `ipython notebook`.) Alternatively, you can replace the lines with open(os.path.join(path, file), 'r') as infile: ... pd.read_csv('./movie_data.csv') ... df.to_csv('./movie_data.csv', index=False) by with open(os.path.join(path, file), 'r', encoding='utf-8') as infile: ... pd.read_csv('./movie_data.csv', encoding='utf-8') ... df.to_csv('./movie_data.csv', index=False, encoding='utf-8') in the following cells to achieve the desired effect.
###Code
import pyprind
import pandas as pd
import os
# change the `basepath` to the directory of the
# unzipped movie dataset
#basepath = '/Users/Sebastian/Desktop/aclImdb/'
basepath = './aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in os.listdir(path):
with open(os.path.join(path, file), 'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]], ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
###Output
0% 100%
[##############################] | ETA: 00:00:00
Total time elapsed: 00:09:04
###Markdown
Shuffling the DataFrame:
###Code
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
###Output
_____no_output_____
###Markdown
Optional: Saving the assembled data as CSV file:
###Code
df.to_csv('./movie_data.csv', index=False)
import pandas as pd
df = pd.read_csv('./movie_data.csv')
df.head(3)
###Output
_____no_output_____
###Markdown
Note: If you have problems with creating the `movie_data.csv` file in the previous chapter, you can download a zip archive at https://github.com/rasbt/python-machine-learning-book/tree/master/code/datasets/movie Introducing the bag-of-words model ... Transforming documents into feature vectors By calling the fit_transform method on CountVectorizer, we just constructed the vocabulary of the bag-of-words model and transformed the following three sentences into sparse feature vectors: 1. The sun is shining 2. The weather is sweet 3. The sun is shining, the weather is sweet, and one and one is two
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
###Output
_____no_output_____
###Markdown
Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts:
###Code
print(count.vocabulary_)
###Output
{'one': 2, 'sweet': 5, 'the': 6, 'shining': 3, 'weather': 8, 'and': 0, 'two': 7, 'is': 1, 'sun': 4}
###Markdown
As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary, which maps the unique words to integer indices. Next let us print the feature vectors that we just created: Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the CountVectorizer vocabulary. For example, the first feature at index position 0 corresponds to the count of the word `and`, which only occurs in the last document, and the word `is` at index position 1 (the 2nd feature in the document vectors) occurs in all three sentences. Those values in the feature vectors are also called the raw term frequencies: *tf(t, d)*—the number of times a term *t* occurs in a document *d*.
###Code
print(bag.toarray())
###Output
[[0 1 0 1 1 0 1 0 0]
[0 1 0 0 0 1 1 0 1]
[2 3 2 1 1 1 2 1 1]]
###Markdown
Assessing word relevancy via term frequency-inverse document frequency
###Code
np.set_printoptions(precision=2)
###Output
_____no_output_____
###Markdown
When we are analyzing text data, we often encounter words that occur across multiple documents from both classes. Those frequently occurring words typically don't contain useful or discriminatory information. In this subsection, we will learn about a useful technique called term frequency-inverse document frequency (tf-idf) that can be used to downweight those frequently occurring words in the feature vectors. The tf-idf can be defined as the product of the term frequency and the inverse document frequency:$$\text{tf-idf}(t,d)=\text{tf}(t,d)\times \text{idf}(t,d)$$Here the tf(t, d) is the term frequency that we introduced in the previous section, and the inverse document frequency *idf(t, d)* can be calculated as:$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$where $n_d$ is the total number of documents, and *df(d, t)* is the number of documents *d* that contain the term *t*. Note that adding the constant 1 to the denominator is optional and serves the purpose of assigning a non-zero value to terms that occur in all training samples; the log is used to ensure that low document frequencies are not given too much weight. Scikit-learn implements yet another transformer, the `TfidfTransformer`, that takes the raw term frequencies from `CountVectorizer` as input and transforms them into tf-idfs:
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
###Output
[[ 0. 0.43 0. 0.56 0.56 0. 0.43 0. 0. ]
[ 0. 0.43 0. 0. 0. 0.56 0.43 0. 0.56]
[ 0.5 0.45 0.5 0.19 0.19 0.19 0.3 0.25 0.19]]
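###Markdown
Before looking at how scikit-learn arrives at these values, here is a small sketch of our own (not code from the book) that plugs the example's counts into the textbook idf/tf-idf definitions given above, using the word *is* in the third document:
###Code
## Sketch: evaluating the plain textbook idf/tf-idf formulas by hand
import numpy as np
n_d = 3       # total number of documents
df_is = 3     # "is" occurs in all three documents
tf_is_d3 = 3  # raw term frequency of "is" in the 3rd document
idf_is = np.log(n_d / (1 + df_is))
# note: with df equal to n_d, this plain textbook idf is negative;
# scikit-learn's smoothed variant discussed next behaves differently
print('textbook idf("is") = %.2f' % idf_is)
print('textbook tf-idf("is", d3) = %.2f' % (tf_is_d3 * idf_is))
###Output
_____no_output_____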
###Markdown
As we saw in the previous subsection, the word *is* had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word *is* is now associated with a relatively small tf-idf (0.45) in document 3 since it is also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information. However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the `TfidfTransformer` calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier. The equations for the idf and tf-idf that were implemented in scikit-learn are: $$\text{idf}(t,d) = \text{log}\frac{1 + n_d}{1 + \text{df}(d, t)}$$The tf-idf equation that was implemented in scikit-learn is as follows:$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$While it is also more typical to normalize the raw term frequencies before calculating the tf-idfs, the `TfidfTransformer` normalizes the tf-idfs directly. By default (`norm='l2'`), scikit-learn's TfidfTransformer applies the L2-normalization, which returns a vector of length 1 by dividing an un-normalized feature vector *v* by its L2-norm:$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$To make sure that we understand how TfidfTransformer works, let us walk through an example and calculate the tf-idf of the word *is* in the 3rd document. The word *is* has a term frequency of 3 (tf = 3) in document 3, and the document frequency of this term is 3 since the term *is* occurs in all three documents (df = 3). Thus, we can calculate the idf as follows:$$\text{idf}("is", d3) = \text{log} \frac{1+3}{1+3} = 0$$Now in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
###Code
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
###Output
tf-idf of term "is" = 3.00
###Markdown
If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vector: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0 , 1.69, 1.29]. However, we notice that the values in this feature vector are different from the values that we obtained from the TfidfTransformer that we used previously. The final step that we are missing in this tf-idf calculation is the L2-normalization, which can be applied as follows: $$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0 , 1.69, 1.29]}{\sqrt{[3.39^2, 3.0^2, 3.39^2, 1.29^2, 1.29^2, 1.29^2, 2.0^2 , 1.69^2, 1.29^2]}}$$$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$$$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$ As we can see, the results match the results returned by scikit-learn's `TfidfTransformer` (below). Since we now understand how tf-idfs are calculated, let us proceed to the next sections and apply those concepts to the movie review dataset.
###Code
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
###Markdown
Cleaning text data
###Code
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
###Output
_____no_output_____
###Markdown
Processing documents into tokens
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
###Output
_____no_output_____
###Markdown
Training a logistic regression model for document classification Strip HTML and punctuation to speed up the GridSearch later:
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
if Version(sklearn_version) < '0.18':
from sklearn.grid_search import GridSearchCV
else:
from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=-1)
###Output
_____no_output_____
###Markdown
**Note:** Some readers [encountered problems](https://github.com/rasbt/python-machine-learning-book/issues/50) running the following code on Windows. Unfortunately, problems with multiprocessing on Windows are not uncommon. So, if the following code cell should result in issues on your machine, try setting `n_jobs=1` (instead of `n_jobs=-1` in the previous code cell).
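For reference, a minimal sketch of that workaround (our own addition, not from the book) simply re-creates the grid search object from the previous cell with a single worker process:
###Code
## Sketch: same pipeline and parameter grid as above, but with n_jobs=1
## to avoid multiprocessing/pickling issues on Windows
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
                           scoring='accuracy',
                           cv=5,
                           verbose=1,
                           n_jobs=1)
###Output
_____no_output_____
###Markdown
With or without that change, the grid search is fitted as follows.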
###Code
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
###Output
Test Accuracy: 0.899
###Markdown
Start comment: Please note that `gs_lr_tfidf.best_score_` is the average k-fold cross-validation score. I.e., if we have a `GridSearchCV` object with 5-fold cross-validation (like the one above), the `best_score_` attribute returns the average score over the 5-folds of the best model. To illustrate this with an example:
###Code
from sklearn.linear_model import LogisticRegression
import numpy as np
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import StratifiedKFold
from sklearn.cross_validation import cross_val_score
else:
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
if Version(sklearn_version) < '0.18':
cv5_idx = list(StratifiedKFold(y, n_folds=5, shuffle=False, random_state=0))
else:
cv5_idx = list(StratifiedKFold(n_splits=5, shuffle=False, random_state=0).split(X, y))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
###Output
_____no_output_____
###Markdown
By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (`cv5_idx`) to the `cross_val_score` scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds. Next, let us use the `GridSearchCV` object and feed it the same 5 cross-validation sets (via the pre-generated `cv5_idx` indices):
###Code
if Version(sklearn_version) < '0.18':
from sklearn.grid_search import GridSearchCV
else:
from sklearn.model_selection import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
###Output
Fitting 5 folds for each of 1 candidates, totalling 5 fits
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.400000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.200000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
###Markdown
As we can see, the scores for the 5 folds are exactly the same as the ones from `cross_val_score` earlier. Now, the best_score_ attribute of the `GridSearchCV` object, which becomes available after `fit`ting, returns the average accuracy score of the best model:
###Code
gs.best_score_
###Output
_____no_output_____
###Markdown
As we can see, the result above is consistent with the average score computed by `cross_val_score`.
###Code
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
###Output
_____no_output_____
###Markdown
End comment. Working with bigger data - online algorithms and out-of-core learning
###Code
import numpy as np
import re
from nltk.corpus import stopwords
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='./movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path='./movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
###Output
_____no_output_____
###Markdown
[Sebastian Raschka](http://sebastianraschka.com), 2015https://github.com/rasbt/python-machine-learning-book Python Machine Learning - Code Examples Chapter 8 - Applying Machine Learning To Sentiment Analysis Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,scikit-learn,nltk
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
###Output
_____no_output_____
###Markdown
Overview - [Obtaining the IMDb movie review dataset](Obtaining-the-IMDb-movie-review-dataset)- [Introducing the bag-of-words model](Introducing-the-bag-of-words-model) - [Transforming words into feature vectors](Transforming-words-into-feature-vectors) - [Assessing word relevancy via term frequency-inverse document frequency](Assessing-word-relevancy-via-term-frequency-inverse-document-frequency) - [Cleaning text data](Cleaning-text-data) - [Processing documents into tokens](Processing-documents-into-tokens)- [Training a logistic regression model for document classification](Training-a-logistic-regression-model-for-document-classification)- [Working with bigger data – online algorithms and out-of-core learning](Working-with-bigger-data-–-online-algorithms-and-out-of-core-learning)- [Summary](Summary) Obtaining the IMDb movie review dataset The IMDB movie review set can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/). After downloading the dataset, decompress the files. A) If you are working with Linux or MacOS X, open a new terminal window, `cd` into the download directory and execute `tar -zxf aclImdb_v1.tar.gz` B) If you are working with Windows, download an archiver such as [7Zip](http://www.7-zip.org) to extract the files from the download archive. Compatibility Note: I received an email from a reader who was having troubles with reading the movie review texts due to encoding issues. Typically, Python's default encoding is set to `'utf-8'`, which shouldn't cause troubles when running this IPython notebook. You can simply check the encoding on your machine by firing up a new Python interpreter from the command line terminal and execute >>> import sys >>> sys.getdefaultencoding() If the returned result is **not** `'utf-8'`, you probably need to change your Python's encoding to `'utf-8'`, for example by typing `export PYTHONIOENCODING=utf8` in your terminal shell prior to running this IPython notebook. (Note that this is a temporary change, and it needs to be executed in the same shell that you'll use to launch `ipython notebook`.) Alternatively, you can replace the lines with open(os.path.join(path, file), 'r') as infile: ... pd.read_csv('./movie_data.csv') ... df.to_csv('./movie_data.csv', index=False) by with open(os.path.join(path, file), 'r', encoding='utf-8') as infile: ... pd.read_csv('./movie_data.csv', encoding='utf-8') ... df.to_csv('./movie_data.csv', index=False, encoding='utf-8') in the following cells to achieve the desired effect.
###Code
import pyprind
import pandas as pd
import os
# change the `basepath` to the directory of the
# unzipped movie dataset
#basepath = '/Users/Sebastian/Desktop/aclImdb/'
basepath = './aclImdb'
labels = {'pos':1, 'neg':0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in os.listdir(path):
with open(os.path.join(path, file), 'r') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]], ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
###Output
0% 100%
[##############################] | ETA: 00:00:00
Total time elapsed: 00:06:23
###Markdown
Shuffling the DataFrame:
###Code
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
###Output
_____no_output_____
###Markdown
Optional: Saving the assembled data as CSV file:
###Code
df.to_csv('./movie_data.csv', index=False)
import pandas as pd
df = pd.read_csv('./movie_data.csv')
df.head(3)
###Output
_____no_output_____
###Markdown
Note: If you have problems with creating the `movie_data.csv` file in the previous chapter, you can download a zip archive at https://github.com/rasbt/python-machine-learning-book/tree/master/code/datasets/movie Introducing the bag-of-words model ... Transforming documents into feature vectors
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining and the weather is sweet'])
bag = count.fit_transform(docs)
print(count.vocabulary_)
print(bag.toarray())
###Output
[[0 1 1 1 0 1 0]
[0 1 0 0 1 1 1]
[1 2 1 1 1 2 1]]
###Markdown
Assessing word relevancy via term frequency-inverse document frequency
###Code
np.set_printoptions(precision=2)
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
tf_is = 2
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1) )
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
###Markdown
Cleaning text data
###Code
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
text = re.sub('[\W]+', ' ', text.lower()) + \
' '.join(emoticons).replace('-', '')
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
###Output
_____no_output_____
###Markdown
Processing documents into tokens
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:] if w not in stop]
###Output
_____no_output_____
###Markdown
Training a logistic regression model for document classification Strip HTML and punctuation to speed up the GridSearch later:
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1,1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1,1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5, verbose=1,
n_jobs=-1)
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
###Output
Test Accuracy: 0.899
###Markdown
Start comment: Please note that `gs_lr_tfidf.best_score_` is the average k-fold cross-validation score. I.e., if we have a `GridSearchCV` object with 5-fold cross-validation (like the one above), the `best_score_` attribute returns the average score over the 5-folds of the best model. To illustrate this with an example:
###Code
from sklearn.cross_validation import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
import numpy as np
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
cv5_idx = list(StratifiedKFold(y, n_folds=5, shuffle=False, random_state=0))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
###Output
_____no_output_____
###Markdown
By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (`cv5_idx`) to the `cross_val_score` scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds. Next, let us use the `GridSearchCV` object and feed it the same 5 cross-validation sets (via the pre-generated `cv5_idx` indices):
###Code
from sklearn.grid_search import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
###Output
Fitting 5 folds for each of 1 candidates, totalling 5 fits
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.400000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.200000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
###Markdown
As we can see, the scores for the 5 folds are exactly the same as the ones from `cross_val_score` earlier. Now, the best_score_ attribute of the `GridSearchCV` object, which becomes available after `fit`ting, returns the average accuracy score of the best model:
###Code
gs.best_score_
###Output
_____no_output_____
###Markdown
As we can see, the result above is consistent with the average score computed by `cross_val_score`.
###Code
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
###Output
_____no_output_____
###Markdown
End comment. Working with bigger data - online algorithms and out-of-core learning
###Code
import numpy as np
import re
from nltk.corpus import stopwords
stop = stopwords.words('english')
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='./movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path='./movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
###Output
_____no_output_____
###Markdown
[Sebastian Raschka](http://sebastianraschka.com), 2015https://github.com/rasbt/python-machine-learning-book Python Machine Learning - Code Examples Chapter 8 - Applying Machine Learning To Sentiment Analysis Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,scikit-learn,nltk
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
###Output
_____no_output_____
###Markdown
Overview - [Obtaining the IMDb movie review dataset](Obtaining-the-IMDb-movie-review-dataset)- [Introducing the bag-of-words model](Introducing-the-bag-of-words-model) - [Transforming words into feature vectors](Transforming-words-into-feature-vectors) - [Assessing word relevancy via term frequency-inverse document frequency](Assessing-word-relevancy-via-term-frequency-inverse-document-frequency) - [Cleaning text data](Cleaning-text-data) - [Processing documents into tokens](Processing-documents-into-tokens)- [Training a logistic regression model for document classification](Training-a-logistic-regression-model-for-document-classification)- [Working with bigger data – online algorithms and out-of-core learning](Working-with-bigger-data-–-online-algorithms-and-out-of-core-learning)- [Summary](Summary) Obtaining the IMDb movie review dataset The IMDB movie review set can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/). After downloading the dataset, decompress the files. A) If you are working with Linux or MacOS X, open a new terminal window, `cd` into the download directory and execute `tar -zxf aclImdb_v1.tar.gz` B) If you are working with Windows, download an archiver such as [7Zip](http://www.7-zip.org) to extract the files from the download archive. Compatibility Note: I received an email from a reader who was having troubles with reading the movie review texts due to encoding issues. Typically, Python's default encoding is set to `'utf-8'`, which shouldn't cause troubles when running this IPython notebook. You can simply check the encoding on your machine by firing up a new Python interpreter from the command line terminal and execute >>> import sys >>> sys.getdefaultencoding() If the returned result is **not** `'utf-8'`, you probably need to change your Python's encoding to `'utf-8'`, for example by typing `export PYTHONIOENCODING=utf8` in your terminal shell prior to running this IPython notebook. (Note that this is a temporary change, and it needs to be executed in the same shell that you'll use to launch `ipython notebook`.) Alternatively, you can replace the lines with open(os.path.join(path, file), 'r') as infile: ... pd.read_csv('./movie_data.csv') ... df.to_csv('./movie_data.csv', index=False) by with open(os.path.join(path, file), 'r', encoding='utf-8') as infile: ... pd.read_csv('./movie_data.csv', encoding='utf-8') ... df.to_csv('./movie_data.csv', index=False, encoding='utf-8') in the following cells to achieve the desired effect.
###Code
import pyprind
import pandas as pd
import os
# change the `basepath` to the directory of the
# unzipped movie dataset
#basepath = '/Users/Sebastian/Desktop/aclImdb/'
basepath = './aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in os.listdir(path):
with open(os.path.join(path, file), 'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]], ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
###Output
0% 100%
[##############################] | ETA: 00:00:00
Total time elapsed: 00:06:23
###Markdown
Shuffling the DataFrame:
###Code
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
###Output
_____no_output_____
###Markdown
Optional: Saving the assembled data as CSV file:
###Code
df.to_csv('./movie_data.csv', index=False)
import pandas as pd
df = pd.read_csv('./movie_data.csv')
df.head(3)
###Output
_____no_output_____
###Markdown
Note: If you have problems with creating the `movie_data.csv` file in the previous chapter, you can download a zip archive at https://github.com/rasbt/python-machine-learning-book/tree/master/code/datasets/movie Introducing the bag-of-words model ... Transforming documents into feature vectors
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining and the weather is sweet'])
bag = count.fit_transform(docs)
print(count.vocabulary_)
print(bag.toarray())
###Output
[[0 1 1 1 0 1 0]
[0 1 0 0 1 1 1]
[1 2 1 1 1 2 1]]
###Markdown
Assessing word relevancy via term frequency-inverse document frequency
###Code
np.set_printoptions(precision=2)
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
tf_is = 2
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
###Markdown
Cleaning text data
###Code
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
###Output
_____no_output_____
###Markdown
Processing documents into tokens
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
###Output
_____no_output_____
###Markdown
Training a logistic regression model for document classification Strip HTML and punctuation to speed up the GridSearch later:
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=-1)
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
###Output
Test Accuracy: 0.899
###Markdown
Start comment: Please note that `gs_lr_tfidf.best_score_` is the average k-fold cross-validation score. I.e., if we have a `GridSearchCV` object with 5-fold cross-validation (like the one above), the `best_score_` attribute returns the average score over the 5-folds of the best model. To illustrate this with an example:
###Code
from sklearn.cross_validation import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
import numpy as np
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
cv5_idx = list(StratifiedKFold(y, n_folds=5, shuffle=False, random_state=0))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
###Output
_____no_output_____
###Markdown
By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (`cv5_idx`) to the `cross_val_score` scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds. Next, let us use the `GridSearchCV` object and feed it the same 5 cross-validation sets (via the pre-generated `cv5_idx` indices):
###Code
from sklearn.grid_search import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
###Output
Fitting 5 folds for each of 1 candidates, totalling 5 fits
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.400000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.200000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
###Markdown
As we can see, the scores for the 5 folds are exactly the same as the ones from `cross_val_score` earlier. Now, the best_score_ attribute of the `GridSearchCV` object, which becomes available after `fit`ting, returns the average accuracy score of the best model:
###Code
gs.best_score_
###Output
_____no_output_____
###Markdown
As we can see, the result above is consistent with the average score computed by `cross_val_score`.
###Code
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
###Output
_____no_output_____
###Markdown
End comment. Working with bigger data - online algorithms and out-of-core learning
###Code
import numpy as np
import re
from nltk.corpus import stopwords
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='./movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path='./movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
###Output
_____no_output_____
###Markdown
*Python Machine Learning 2nd Edition* by [Sebastian Raschka](https://sebastianraschka.com), Packt Publishing Ltd. 2017Code Repository: https://github.com/rasbt/python-machine-learning-book-2nd-editionCode License: [MIT License](https://github.com/rasbt/python-machine-learning-book-2nd-edition/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 8 - Applying Machine Learning To Sentiment Analysis Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a "Sebastian Raschka" -u -d -v -p numpy,pandas,sklearn,nltk
###Output
Sebastian Raschka
last updated: 2018-07-02
CPython 3.6.5
IPython 6.4.0
numpy 1.14.5
pandas 0.23.1
sklearn 0.19.1
nltk 3.3
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview - [Preparing the IMDb movie review data for text processing](Preparing-the-IMDb-movie-review-data-for-text-processing) - [Obtaining the IMDb movie review dataset](Obtaining-the-IMDb-movie-review-dataset) - [Preprocessing the movie dataset into more convenient format](Preprocessing-the-movie-dataset-into-more-convenient-format)- [Introducing the bag-of-words model](Introducing-the-bag-of-words-model) - [Transforming words into feature vectors](Transforming-words-into-feature-vectors) - [Assessing word relevancy via term frequency-inverse document frequency](Assessing-word-relevancy-via-term-frequency-inverse-document-frequency) - [Cleaning text data](Cleaning-text-data) - [Processing documents into tokens](Processing-documents-into-tokens)- [Training a logistic regression model for document classification](Training-a-logistic-regression-model-for-document-classification)- [Working with bigger data – online algorithms and out-of-core learning](Working-with-bigger-data-–-online-algorithms-and-out-of-core-learning)- [Topic modeling](Topic-modeling) - [Decomposing text documents with Latent Dirichlet Allocation](Decomposing-text-documents-with-Latent-Dirichlet-Allocation) - [Latent Dirichlet Allocation with scikit-learn](Latent-Dirichlet-Allocation-with-scikit-learn)- [Summary](Summary) Preparing the IMDb movie review data for text processing Obtaining the IMDb movie review dataset The IMDB movie review set can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/). After downloading the dataset, decompress the files. A) If you are working with Linux or MacOS X, open a new terminal window, `cd` into the download directory and execute `tar -zxf aclImdb_v1.tar.gz` B) If you are working with Windows, download an archiver such as [7Zip](http://www.7-zip.org) to extract the files from the download archive. **Optional code to download and unzip the dataset via Python:**
###Code
import os
import sys
import tarfile
import time
source = 'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'
target = 'aclImdb_v1.tar.gz'
def reporthook(count, block_size, total_size):
global start_time
if count == 0:
start_time = time.time()
return
duration = time.time() - start_time
progress_size = int(count * block_size)
speed = progress_size / (1024.**2 * duration)
percent = count * block_size * 100. / total_size
sys.stdout.write("\r%d%% | %d MB | %.2f MB/s | %d sec elapsed" %
(percent, progress_size / (1024.**2), speed, duration))
sys.stdout.flush()
if not os.path.isdir('aclImdb') and not os.path.isfile('aclImdb_v1.tar.gz'):
if (sys.version_info < (3, 0)):
import urllib
urllib.urlretrieve(source, target, reporthook)
else:
import urllib.request
urllib.request.urlretrieve(source, target, reporthook)
if not os.path.isdir('aclImdb'):
with tarfile.open(target, 'r:gz') as tar:
tar.extractall()
###Output
_____no_output_____
###Markdown
Preprocessing the movie dataset into more convenient format
###Code
import pyprind
import pandas as pd
import os
# change the `basepath` to the directory of the
# unzipped movie dataset
basepath = 'aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in sorted(os.listdir(path)):
with open(os.path.join(path, file),
'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]],
ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
###Output
0% [##############################] 100% | ETA: 00:00:00
Total time elapsed: 00:04:19
###Markdown
Shuffling the DataFrame:
###Code
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
###Output
_____no_output_____
###Markdown
Optional: Saving the assembled data as CSV file:
###Code
df.to_csv('movie_data.csv', index=False, encoding='utf-8')
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
df.shape
###Output
_____no_output_____
###Markdown
NoteIf you have problems with creating the `movie_data.csv`, you can download a zip archive at https://github.com/rasbt/python-machine-learning-book-2nd-edition/tree/master/code/ch08/ Introducing the bag-of-words model ... Transforming documents into feature vectors By calling the fit_transform method on CountVectorizer, we just constructed the vocabulary of the bag-of-words model and transformed the following three sentences into sparse feature vectors:1. The sun is shining2. The weather is sweet3. The sun is shining, the weather is sweet, and one and one is two
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
###Output
_____no_output_____
###Markdown
Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts:
###Code
print(count.vocabulary_)
###Output
{'the': 6, 'sun': 4, 'is': 1, 'shining': 3, 'weather': 8, 'sweet': 5, 'and': 0, 'one': 2, 'two': 7}
###Markdown
As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary, which maps the unique words to integer indices. Next let us print the feature vectors that we just created: Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the CountVectorizer vocabulary. For example, the first feature at index position 0 represents the count of the word 'and', which only occurs in the last document, and the word 'is' at index position 1 (the 2nd feature in the document vectors) occurs in all three sentences. Those values in the feature vectors are also called the raw term frequencies: *tf (t,d)*—the number of times a term t occurs in a document *d*.
###Code
print(bag.toarray())
###Output
[[0 1 0 1 1 0 1 0 0]
[0 1 0 0 0 1 1 0 1]
[2 3 2 1 1 1 2 1 1]]
###Markdown
Assessing word relevancy via term frequency-inverse document frequency
###Code
np.set_printoptions(precision=2)
###Output
_____no_output_____
###Markdown
When we are analyzing text data, we often encounter words that occur across multiple documents from both classes. Those frequently occurring words typically don't contain useful or discriminatory information. In this subsection, we will learn about a useful technique called term frequency-inverse document frequency (tf-idf) that can be used to downweight those frequently occurring words in the feature vectors. The tf-idf can be defined as the product of the term frequency and the inverse document frequency:$$\text{tf-idf}(t,d)=\text{tf (t,d)}\times \text{idf}(t,d)$$Here the tf(t, d) is the term frequency that we introduced in the previous section, and the inverse document frequency *idf(t, d)* can be calculated as:$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$where $n_d$ is the total number of documents, and *df(d, t)* is the number of documents *d* that contain the term *t*. Note that adding the constant 1 to the denominator is optional and serves the purpose of assigning a non-zero value to terms that occur in all training samples; the log is used to ensure that low document frequencies are not given too much weight.Scikit-learn implements yet another transformer, the `TfidfTransformer`, that takes the raw term frequencies from `CountVectorizer` as input and transforms them into tf-idfs:
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True,
norm='l2',
smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs))
.toarray())
###Output
[[ 0. 0.43 0. 0.56 0.56 0. 0.43 0. 0. ]
[ 0. 0.43 0. 0. 0. 0.56 0.43 0. 0.56]
[ 0.5 0.45 0.5 0.19 0.19 0.19 0.3 0.25 0.19]]
###Markdown
As we saw in the previous subsection, the word 'is' had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word 'is' is now associated with a relatively small tf-idf (0.45) in document 3 since it is also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information. However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the `TfidfTransformer` calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier. The equations for the idf and tf-idf that were implemented in scikit-learn are: $$\text{idf} (t,d) = \text{log}\frac{1 + n_d}{1 + \text{df}(d, t)}$$The tf-idf equation that was implemented in scikit-learn is as follows:$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$While it is also more typical to normalize the raw term frequencies before calculating the tf-idfs, the `TfidfTransformer` normalizes the tf-idfs directly.By default (`norm='l2'`), scikit-learn's TfidfTransformer applies the L2-normalization, which returns a vector of length 1 by dividing an un-normalized feature vector *v* by its L2-norm:$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$To make sure that we understand how TfidfTransformer works, let us walk through an example and calculate the tf-idf of the word 'is' in the 3rd document.The word 'is' has a term frequency of 3 (tf = 3) in document 3, and the document frequency of this term is 3 since the term 'is' occurs in all three documents (df = 3). Thus, we can calculate the idf as follows:$$\text{idf}("is", d3) = \text{log} \frac{1+3}{1+3} = 0$$Now in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
###Code
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
###Output
tf-idf of term "is" = 3.00
###Markdown
If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vector: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0 , 1.69, 1.29]. However, we notice that the values in this feature vector are different from the values that we obtained from the TfidfTransformer that we used previously. The final step that we are missing in this tf-idf calculation is the L2-normalization, which can be applied as follows: $$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0 , 1.69, 1.29]}{\sqrt{[3.39^2, 3.0^2, 3.39^2, 1.29^2, 1.29^2, 1.29^2, 2.0^2 , 1.69^2, 1.29^2]}}$$$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$$$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$ As we can see, the results match the results returned by scikit-learn's `TfidfTransformer` (below). Since we now understand how tf-idfs are calculated, let us proceed to the next sections and apply those concepts to the movie review dataset.
###Code
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
###Markdown
Cleaning text data
###Code
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)',
text)
text = (re.sub('[\W]+', ' ', text.lower()) +
' '.join(emoticons).replace('-', ''))
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
###Output
_____no_output_____
###Markdown
Processing documents into tokens
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
###Output
_____no_output_____
###Markdown
Training a logistic regression model for document classification Strip HTML and punctuation to speed up the GridSearch later:
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=-1)
###Output
_____no_output_____
###Markdown
**Important Note about `n_jobs`**Please note that it is highly recommended to use `n_jobs=-1` (instead of `n_jobs=1`) in the previous code example to utilize all available cores on your machine and speed up the grid search. However, some Windows users reported issues when running the previous code with the `n_jobs=-1` setting related to pickling the tokenizer and tokenizer_porter functions for multiprocessing on Windows. Another workaround would be to replace those two functions, `[tokenizer, tokenizer_porter]`, with `[str.split]`. However, note that the replacement by the simple `str.split` would not support stemming. **Important Note about the running time**Executing the following code cell **may take up to 30-60 min** depending on your machine, since based on the parameter grid we defined, there are 2*2*2*3*5 + 2*2*2*3*5 = 240 models to fit.If you do not wish to wait so long, you could reduce the size of the dataset by decreasing the number of training samples, for example, as follows: X_train = df.loc[:2500, 'review'].values y_train = df.loc[:2500, 'sentiment'].values However, note that decreasing the training set size to such a small number will likely result in poorly performing models. Alternatively, you can delete parameters from the grid above to reduce the number of models to fit -- for example, by using the following: param_grid = [{'vect__ngram_range': [(1, 1)], 'vect__stop_words': [stop, None], 'vect__tokenizer': [tokenizer], 'clf__penalty': ['l1', 'l2'], 'clf__C': [1.0, 10.0]}, ]
###Code
## @Readers: PLEASE IGNORE THIS CELL
##
## This cell is meant to generate more
## "logging" output when this notebook is run
## on the Travis Continuous Integration
## platform to test the code as well as
## speeding up the run using a smaller
## dataset for debugging
if 'TRAVIS' in os.environ:
gs_lr_tfidf.verbose=2
X_train = df.loc[:250, 'review'].values
y_train = df.loc[:250, 'sentiment'].values
X_test = df.loc[25000:25250, 'review'].values
y_test = df.loc[25000:25250, 'sentiment'].values
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
###Output
Test Accuracy: 0.899
###Markdown
Start comment: Please note that `gs_lr_tfidf.best_score_` is the average k-fold cross-validation score. I.e., if we have a `GridSearchCV` object with 5-fold cross-validation (like the one above), the `best_score_` attribute returns the average score over the 5-folds of the best model. To illustrate this with an example:
###Code
from sklearn.linear_model import LogisticRegression
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
cv5_idx = list(StratifiedKFold(n_splits=5, shuffle=False, random_state=0).split(X, y))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
###Output
_____no_output_____
###Markdown
By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (`cv5_idx`) to the `cross_val_score` scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds. Next, let us use the `GridSearchCV` object and feed it the same 5 cross-validation sets (via the pre-generated `cv5_idx` indices):
###Code
from sklearn.model_selection import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
###Output
Fitting 5 folds for each of 1 candidates, totalling 5 fits
[CV] ................................................................
[CV] ................................. , score=0.600000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.400000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.600000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.200000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.600000, total= 0.0s
###Markdown
As we can see, the scores for the 5 folds are exactly the same as the ones from `cross_val_score` earlier. Now, the best_score_ attribute of the `GridSearchCV` object, which becomes available after `fit`ting, returns the average accuracy score of the best model:
###Code
gs.best_score_
###Output
_____no_output_____
###Markdown
As we can see, the result above is consistent with the average score computed by `cross_val_score`.
###Code
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
###Output
_____no_output_____
###Markdown
End comment. Working with bigger data - online algorithms and out-of-core learning
###Code
# This cell is not contained in the book but
# added for convenience so that the notebook
# can be executed starting here, without
# executing prior code in this notebook
import os
import gzip
if not os.path.isfile('movie_data.csv'):
if not os.path.isfile('movie_data.csv.gz'):
        print('Please place a copy of the movie_data.csv.gz '
              'in this directory. You can obtain it by '
              'a) executing the code in the beginning of this '
              'notebook or b) by downloading it from GitHub: '
              'https://github.com/rasbt/python-machine-learning-'
              'book-2nd-edition/blob/master/code/ch08/movie_data.csv.gz')
else:
        with gzip.open('movie_data.csv.gz', 'rb') as in_f, \
                open('movie_data.csv', 'wb') as out_f:
            out_f.write(in_f.read())
import numpy as np
import re
from nltk.corpus import stopwords
# The `stop` is defined as earlier in this chapter
# Added it here for convenience, so that this section
# can be run as standalone without executing prior code
# in the directory
stop = stopwords.words('english')
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
###Output
_____no_output_____
###Markdown
**Note**- In scikit-learn >= 0.19, the `n_iter` parameter of `SGDClassifier` was renamed, i.e., you can replace `SGDClassifier(n_iter, ...)` by `SGDClassifier(max_iter, ...)`.
###Code
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
if Version(sklearn_version) < '0.19':  # max_iter is only available in scikit-learn >= 0.19
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
else:
clf = SGDClassifier(loss='log', random_state=1, max_iter=1)
doc_stream = stream_docs(path='movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
###Output
_____no_output_____
###Markdown
Topic modeling Decomposing text documents with Latent Dirichlet Allocation Latent Dirichlet Allocation with scikit-learn
###Code
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
## @Readers: PLEASE IGNORE THIS CELL
##
## This cell is meant to create a smaller dataset if
## the notebook is run on the Travis Continuous Integration
## platform to test the code on a smaller dataset
## to prevent timeout errors and just serves a debugging tool
## for this notebook
if 'TRAVIS' in os.environ:
df.loc[:500].to_csv('movie_data.csv')
df = pd.read_csv('movie_data.csv', nrows=500)
print('SMALL DATA SUBSET CREATED FOR TESTING')
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer(stop_words='english',
max_df=.1,
max_features=5000)
X = count.fit_transform(df['review'].values)
from sklearn.decomposition import LatentDirichletAllocation
lda = LatentDirichletAllocation(n_topics=10,
random_state=123,
learning_method='batch')
X_topics = lda.fit_transform(X)
lda.components_.shape
n_top_words = 5
feature_names = count.get_feature_names()
for topic_idx, topic in enumerate(lda.components_):
print("Topic %d:" % (topic_idx + 1))
print(" ".join([feature_names[i]
for i in topic.argsort()\
[:-n_top_words - 1:-1]]))
###Output
Topic 1:
worst minutes awful script stupid
Topic 2:
family mother father children girl
Topic 3:
american war dvd music tv
Topic 4:
human audience cinema art sense
Topic 5:
police guy car dead murder
Topic 6:
horror house sex girl woman
Topic 7:
role performance comedy actor performances
Topic 8:
series episode war episodes tv
Topic 9:
book version original read novel
Topic 10:
action fight guy guys cool
###Markdown
Based on reading the 5 most important words for each topic, we may guess that the LDA identified the following topics: 1. Generally bad movies (not really a topic category)2. Movies about families3. War movies4. Art movies5. Crime movies6. Horror movies7. Comedies8. Movies somehow related to TV shows9. Movies based on books10. Action movies To confirm that the categories make sense based on the reviews, let's plot 5 movies from the horror movie category (category 6 at index position 5):
###Code
horror = X_topics[:, 5].argsort()[::-1]
for iter_idx, movie_idx in enumerate(horror[:3]):
print('\nHorror movie #%d:' % (iter_idx + 1))
print(df['review'][movie_idx][:300], '...')
###Output
Horror movie #1:
House of Dracula works from the same basic premise as House of Frankenstein from the year before; namely that Universal's three most famous monsters; Dracula, Frankenstein's Monster and The Wolf Man are appearing in the movie together. Naturally, the film is rather messy therefore, but the fact that ...
Horror movie #2:
Okay, what the hell kind of TRASH have I been watching now? "The Witches' Mountain" has got to be one of the most incoherent and insane Spanish exploitation flicks ever and yet, at the same time, it's also strangely compelling. There's absolutely nothing that makes sense here and I even doubt there ...
Horror movie #3:
<br /><br />Horror movie time, Japanese style. Uzumaki/Spiral was a total freakfest from start to finish. A fun freakfest at that, but at times it was a tad too reliant on kitsch rather than the horror. The story is difficult to summarize succinctly: a carefree, normal teenage girl starts coming fac ...
###Markdown
Using the preceding code example, we printed the first 300 characters from the top 3 horror movies and indeed, we can see that the reviews -- even though we don't know which exact movie they belong to -- sound like reviews of horror movies. (However, one might argue that movie 2 could also belong to topic category 1.) Summary ... ---Readers may ignore the next cell.
###Code
! python ../.convert_notebook_to_script.py --input ch08.ipynb --output ch08.py
###Output
[NbConvertApp] Converting notebook ch08.ipynb to script
[NbConvertApp] Writing 11500 bytes to ch08.txt
###Markdown
Copyright (c) 2015, 2016 [Sebastian Raschka](sebastianraschka.com)https://github.com/1iyiwei/pyml[MIT License](https://github.com/1iyiwei/pyml/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 8 - Applying Machine Learning To Sentiment AnalysisLet's apply what we have learned so far for a real case study.Many people express opinions on the internet and social media sites.Such opinions are a rich source of information for many applications:* business* politics* scienceApply natural language processing (NLP), in particular sentiment analysis, over movie reviews ChallengesWritten opinions/reviews have varying lengths* cannot be treated as fixed-dimension inputsNot all Raw texts suitable for direct machine learning * need clean upHow to pick and train a machine learning modelHandle large datasets* potentially out-of-core TopicsData-preprocessing* cleaning and preparing text data from movie reviews* building (fixed-dimension) feature vectors from (variable-dimension) text documentsTraining a machine learning model to classify positive and negative movie reviewsWorking with large text datasets using out-of-core learning Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a '' -u -d -v -p numpy,pandas,matplotlib,sklearn,nltk
###Output
last updated: 2016-11-27
CPython 3.5.2
IPython 4.2.0
numpy 1.11.1
pandas 0.18.1
matplotlib 1.5.1
sklearn 0.18
nltk 3.2.1
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview- [Obtaining the IMDb movie review dataset](Obtaining-the-IMDb-movie-review-dataset)- [Introducing the bag-of-words model](Introducing-the-bag-of-words-model) - [Transforming words into feature vectors](Transforming-words-into-feature-vectors) - [Assessing word relevancy via term frequency-inverse document frequency](Assessing-word-relevancy-via-term-frequency-inverse-document-frequency) - [Cleaning text data](Cleaning-text-data) - [Processing documents into tokens](Processing-documents-into-tokens)- [Training a logistic regression model for document classification](Training-a-logistic-regression-model-for-document-classification)- [Working with bigger data – online algorithms and out-of-core learning](Working-with-bigger-data-–-online-algorithms-and-out-of-core-learning)- [Summary](Summary) Obtaining the IMDb movie review datasetThe [IMDB movie](http://www.imdb.com/) review set can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/).* also available under [../datasets/movie/](../datasets/movie/) as part of the github repo50,000 movie reviews, manually labeled as being positive or negative for classification.
###Code
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
###Output
_____no_output_____
###Markdown
NoteIf you have problems with creating the `movie_data.csv` file in the previous chapter, you can download a zip archive at https://github.com/1iyiwei/pyml/tree/master/code/datasets/movie
###Code
import urllib.request
import os
# the file we eventually need to access
csv_filename = 'movie_data.csv'
# a global variable to select data source: local or remote
data_source = 'local'
if data_source == 'local':
#basepath = '/Users/Sebastian/Desktop/aclImdb/'
basepath = '../datasets/movie/'
zip_filename = 'movie_data.csv.zip'
else: # remote
url = 'http://ai.stanford.edu/~amaas/data/sentiment/'
basepath = '.'
zip_filename = 'aclImdb_v1.tar.gz'
remote_file = os.path.join(url, zip_filename)
local_file = os.path.join(basepath, zip_filename)
csv_file = os.path.join(basepath, csv_filename)
if not os.path.isfile(csv_file) and not os.path.isfile(local_file):
urllib.request.urlretrieve(remote_file, local_file)
###Output
_____no_output_____
###Markdown
DecompressingAfter downloading the dataset, decompress the files.A) If you are working with Linux or MacOS X, open a new terminal window, `cd` into the download directory and execute `tar -zxf aclImdb_v1.tar.gz` or `tar -xvzf aclImdb_v1.tar.gz` for the verbose modeB) If you are working with Windows, download an archiver such as [7Zip](http://www.7-zip.org) to extract the files from the download archive.C) The code below decompresses directly via Python.
###Code
# The code below decompresses directly via Python.
import os
import zipfile
import tarfile
# change the `basepath` to the directory of the
# unzipped movie dataset
csv_file = os.path.join(basepath, csv_filename)
zip_file = os.path.join(basepath, zip_filename)
if not os.path.isfile(csv_file):
    if tarfile.is_tarfile(zip_file):
        # use a context manager so the archive is closed automatically
        with tarfile.open(zip_file, "r") as tartar:
            tartar.extractall(basepath)
    else:
        with zipfile.ZipFile(zip_file, "r") as zipper:
            zipper.extractall(basepath)
###Output
_____no_output_____
###Markdown
Reading the datasetThe decompressed file is in csv format, we can read it via panda as usual.PyPrind (Python Progress Indicator)* useful for visualizing progress for processing large datasets* pip install pyprind Compatibility Note:I received an email from a reader who was having troubles with reading the movie review texts due to encoding issues. Typically, Python's default encoding is set to `'utf-8'`, which shouldn't cause troubles when running this IPython notebook. You can simply check the encoding on your machine by firing up a new Python interpreter from the command line terminal and execute >>> import sys >>> sys.getdefaultencoding() If the returned result is **not** `'utf-8'`, you probably need to change your Python's encoding to `'utf-8'`, for example by typing `export PYTHONIOENCODING=utf8` in your terminal shell prior to running this IPython notebook. (Note that this is a temporary change, and it needs to be executed in the same shell that you'll use to launch `ipython notebook`.Alternatively, you can replace the lines with open(os.path.join(path, file), 'r') as infile: ... pd.read_csv('./movie_data.csv') ... df.to_csv('./movie_data.csv', index=False)by with open(os.path.join(path, file), 'r', encoding='utf-8') as infile: ... pd.read_csv('./movie_data.csv', encoding='utf-8') ... df.to_csv('./movie_data.csv', index=False, encoding='utf-8') in the following cells to achieve the desired effect.
###Code
import pyprind
import pandas as pd
import os
db_path = 'aclImdb';
if not os.path.isfile(csv_file):
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(db_path, s, l)
for file in os.listdir(path):
with open(os.path.join(path, file), 'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]], ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
###Output
_____no_output_____
###Markdown
Optional: Saving the assembled data as CSV file:
###Code
if not os.path.isfile(csv_file):
df.to_csv(os.path.join(basepath, csv_filename), index=False, encoding='utf-8')
###Output
_____no_output_____
###Markdown
Read back the data-frame from file, local or remote.
###Code
import pandas as pd
df = pd.read_csv(os.path.join(basepath, csv_filename), encoding='utf-8')
###Output
_____no_output_____
###Markdown
Shuffling the DataFrame:
###Code
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
# first few entries
df.head(3)
# a complete review
print(df.values[0])
###Output
[ 'Election is a Chinese mob movie, or triads in this case. Every two years an election is held to decide on a new leader, and at first it seems a toss up between Big D (Tony Leung Ka Fai, or as I know him, "The Other Tony Leung") and Lok (Simon Yam, who was Judge in Full Contact!). Though once Lok wins, Big D refuses to accept the choice and goes to whatever lengths he can to secure recognition as the new leader. Unlike any other Asian film I watch featuring gangsters, this one is not an action movie. It has its bloody moments, when necessary, as in Goodfellas, but it\'s basically just a really effective drama. There are a lot of characters, which is really hard to keep track of, but I think that plays into the craziness of it all a bit. A 100-year-old baton, which is the symbol of power I mentioned before, changes hands several times before things settle down. And though it may appear that the film ends at the 65 or 70-minute mark, there are still a couple big surprises waiting. Simon Yam was my favorite character here and sort of anchors the picture.<br /><br />Election was quite the award winner at last year\'s Hong Kong Film Awards, winning for best actor (Tony Leung), best picture, best director (Johnny To, who did Heroic Trio!!), and best screenplay. It also had nominations for cinematography, editing, film score (which I loved), and three more acting performances (including Yam).'
1]
###Markdown
Introducing the bag-of-words modelMovie reviews vary in lengths* cannot use them directly as inputs for models that expect fixed dimension inputsWe need to convert the dataset into numerical form* e.g. categorical variables (nominal or ordinal) into numerical variablesBag-of-words: represent text as numerical feature vectors* create a vocabulary of unique tokens, e.g. words* compute a histogram counting the number of occurrences of each wordThe feature vector would be sparse since most of the entries are $0$ Transforming documents into feature vectorsBy calling the fit_transform method on CountVectorizer, we just constructed the vocabulary of the bag-of-words model and transformed the following three sentences into sparse feature vectors:1. The sun is shining2. The weather is sweet3. The sun is shining, the weather is sweet, and one and one is two
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
# fixed-dimension features we can use for machine learning
bag = count.fit_transform(docs)
###Output
_____no_output_____
###Markdown
Print the contents of the vocabulary to get a better understanding of the underlying concepts:
###Code
# the dictionary trained from the document data
print(count.vocabulary_)
###Output
{'is': 1, 'one': 2, 'shining': 3, 'two': 7, 'and': 0, 'sweet': 5, 'sun': 4, 'the': 6, 'weather': 8}
###Markdown
The vocabulary is stored in a Python dictionary* key: words* value: integer indices
###Code
# convert from sparse dictionary to dense array
# fixed dimension feature
print(bag.toarray())
###Output
[[0 1 0 1 1 0 1 0 0]
[0 1 0 0 0 1 1 0 1]
[2 3 2 1 1 1 2 1 1]]
###Markdown
Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the CountVectorizer vocabulary. For example, the 1st feature at index position 0 represents the count of the word 'and', which only occurs in the last document, and the word 'is' at index position 1 (the 2nd feature in the document vectors) occurs in all three sentences. Those values in the feature vectors are also called the raw term frequencies: *tf (t,d)*—the number of times a term t occurs in a document *d*. N-gramN contiguous sequence of items1-gram: individual words* e.g. the, sun, is, shining2-gram: pairs of adjacent words* e.g. the sun, sun is, is shiningCountVectorizer can work with n-grams via the ngram_range parameter (see the short sketch below). Assessing word relevancy via term frequency-inverse document frequency Term-frequency (tf) alone is not enough.* common words typically don't contain useful or discriminatory information.* e.g., the, is, and ...Also consider inverse document frequency (idf)* downweight those frequently occurring words in the feature vectors. The tf-idf can be defined as the product of the term frequency and the inverse document frequency:$$\text{tf-idf}(t,d)=\text{tf (t,d)}\times \text{idf}(t,d)$$Here the tf(t, d) is the term frequency introduced above, and the inverse document frequency $idf(t, d)$ can be calculated as:$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)}$$* $n_d$ is the total number of documents* $df(d, t)$ is the number of documents $d$ that contain the term $t$. $idf$ gives higher weights to rarer words. Note* adding the constant 1 to the denominator to avoid division-by-zero.* the log is used to ensure that low document frequencies are not given too much weight. Scikit-learn implements yet another transformer, the `TfidfTransformer`, that takes the raw term frequencies from `CountVectorizer` as input and transforms them into tf-idfs:
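As a brief aside, here is a minimal sketch (not part of the book's code) of the `ngram_range` parameter mentioned above; the 2-gram setting below is purely illustrative:
###Code
# Illustrative only: count 2-grams (pairs of adjacent words) instead of single words
from sklearn.feature_extraction.text import CountVectorizer
count_2gram = CountVectorizer(ngram_range=(2, 2))
bag_2gram = count_2gram.fit_transform(['The sun is shining',
                                       'The weather is sweet'])
print(count_2gram.vocabulary_)  # keys are 2-grams such as 'the sun', 'sun is', ...
print(bag_2gram.toarray())
###Output
_____no_output_____
###Markdown
Returning to the main thread, the next cell applies the `TfidfTransformer` introduced above: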
###Code
np.set_printoptions(precision=2)
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
###Output
[[ 0. 0.43 0. 0.56 0.56 0. 0.43 0. 0. ]
[ 0. 0.43 0. 0. 0. 0.56 0.43 0. 0.56]
[ 0.5 0.45 0.5 0.19 0.19 0.19 0.3 0.25 0.19]]
###Markdown
As we saw in the previous subsection, the word 'is' had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word 'is' is now associated with a relatively small tf-idf (0.45) in document 3 since it is also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information. Scikit-learn tf-idfHowever, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the `TfidfTransformer` calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier. The equations for the idf and tf-idf that were implemented in scikit-learn are:$$\text{idf} (t,d) = \text{log}\frac{1 + n_d}{1 + \text{df}(d, t)}$$The tf-idf equation that was implemented in scikit-learn is as follows:$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$ While it is also more typical to normalize the raw term frequencies before calculating the tf-idfs, the `TfidfTransformer` normalizes the tf-idfs directly.By default (`norm='l2'`), scikit-learn's TfidfTransformer applies the L2-normalization, which returns a vector of length 1 by dividing an un-normalized feature vector *v* by its L2-norm:$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$ ExampleTo make sure that we understand how TfidfTransformer works, let us walk through an example and calculate the tf-idf of the word 'is' in the 3rd document.The word `is` has a term frequency of 3 (tf = 3) in document 3, and the document frequency of this term is 3 since the term 'is' occurs in all three documents (df = 3). Thus, we can calculate the idf as follows:$$\text{idf}("is", d3) = \text{log} \frac{1+3}{1+3} = 0$$Now in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
###Code
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
###Output
tf-idf of term "is" = 3.00
###Markdown
If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vector: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0 , 1.69, 1.29]. However, we notice that the values in this feature vector are different from the values that we obtained from the TfidfTransformer that we used previously.The final step that we are missing in this tf-idf calculation is the L2-normalization, which can be applied as follows: $$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0 , 1.69, 1.29]}{\sqrt{[3.39^2, 3.0^2, 3.39^2, 1.29^2, 1.29^2, 1.29^2, 2.0^2 , 1.69^2, 1.29^2]}}$$$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$$$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$ As we can see, the results match the results returned by scikit-learn's `TfidfTransformer` (below).
###Code
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True) # notice norm is None not l2
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1] # for the last document
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
###Markdown
Cleaning text dataThe text may contain stuff irrelevant for sentiment analysis.* html tags, punctuation, non-letter characters, etc.
###Code
df.loc[0, 'review'][-50:]
###Output
_____no_output_____
###Markdown
Use regular expression for cleaning text dataReferences:* https://developers.google.com/edu/python/regular-expressions* https://docs.python.org/3.4/library/re.html Remove all punctuation except for emoticons, which convey sentiment.
###Code
import re
def preprocessor(text):
# [] for set of characters, ^ inside [] means invert, i.e. not > below
# * means 0 or more occurances of the pattern
text = re.sub('<[^>]*>', '', text) # remove html tags between pairs of < and >
# () for group, subpart of the whole pattern we look for
# findall will return tuples each containing groups
# (?:) means not returing the group result for findall
# | means or, \ for escape sequence
# first group eye : or ; or =
# second group nose - 0 or 1 time via ?
# third group mouth ) or ( or D or P
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
# matching examples:
# :-)
# =D
# convert to lower case as upper/lower case doesn't matter for sentiment
# replace all non-word characters by space
# \w: letters, digits, _
# \W: the complement set
text = re.sub('[\W]+', ' ', text.lower())
# add back emoticons, though in different orders
# and without nose "-", e.g. :) and :-) are considered the same
text = text + ' '.join(emoticons).replace('-', '')
return text
###Output
_____no_output_____
###Markdown
Example results
###Code
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
###Output
_____no_output_____
###Markdown
Emoticons are moved to the end; ordering doesn't matter for 1-gram analysis. Cleanup the data
###Code
df['review'] = df['review'].apply(preprocessor)
###Output
_____no_output_____
###Markdown
Processing documents into tokensSplit an entity into its constituent components, e.g. words for documents.Stemming: transform a word into its root form.* e.g. running $\rightarrow$ run* see http://www.nltk.org/book/ for more details and options for stemming.
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
# split along white spaces
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
###Output
_____no_output_____
###Markdown
Remove stop-wordsStop-words are extremely common in all texts* e.g. is, and, has, etc.Removing them completely can help document analysis.
###Code
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
###Output
_____no_output_____
###Markdown
Training a logistic regression model for document classificationLet's try to apply logistic regression to classify the movie reviews.Use cleaned-up documents (no html tags or punctuation except for emoticons), but leave tokenization as a hyper-parameter. Split into training and test datasets
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
# Use a smaller subset if it took too long to run the full datasets above
train_subset_size = 2500
test_subset_size = 2500
#print(X_train.shape)
if train_subset_size > 0:
X_train = X_train[:train_subset_size]
y_train = y_train[:train_subset_size]
if test_subset_size > 0:
X_test = X_test[:test_subset_size]
y_test = y_test[:test_subset_size]
#print(X_train.shape)
###Output
_____no_output_____
###Markdown
Grid-search hyper-parametersTwo grid sets: with and without idfDifferent regularization strengths via $C$.Use pipeline as before.
###Code
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
if Version(sklearn_version) < '0.18':
from sklearn.grid_search import GridSearchCV
else:
from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=1)
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
###Output
Best parameter set: {'clf__penalty': 'l2', 'vect__stop_words': ['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', 'she', 'her', 'hers', 'herself', 'it', 'its', 'itself', 'they', 'them', 'their', 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', 'these', 'those', 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', 'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after', 'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further', 'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more', 'most', 'other', 'some', 'such', 'no', 'nor', 'not', 'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't', 'can', 'will', 'just', 'don', 'should', 'now', 'd', 'll', 'm', 'o', 're', 've', 'y', 'ain', 'aren', 'couldn', 'didn', 'doesn', 'hadn', 'hasn', 'haven', 'isn', 'ma', 'mightn', 'mustn', 'needn', 'shan', 'shouldn', 'wasn', 'weren', 'won', 'wouldn'], 'vect__ngram_range': (1, 1), 'vect__tokenizer': <function tokenizer at 0x0000029516329950>, 'clf__C': 100.0}
CV Accuracy: 0.850
###Markdown
The CV accuracy and test accuracy would be a bit lower if we use a subset of all data, but are still reasonable.
###Code
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
###Output
Test Accuracy: 0.843
###Markdown
Start comment: Please note that `gs_lr_tfidf.best_score_` is the average k-fold cross-validation score. I.e., if we have a `GridSearchCV` object with 5-fold cross-validation (like the one above), the `best_score_` attribute returns the average score over the 5-folds of the best model. To illustrate this with an example:
###Code
from sklearn.linear_model import LogisticRegression
import numpy as np
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import StratifiedKFold
from sklearn.cross_validation import cross_val_score
else:
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
if Version(sklearn_version) < '0.18':
cv5_idx = list(StratifiedKFold(y, n_folds=5, shuffle=False, random_state=0))
else:
cv5_idx = list(StratifiedKFold(n_splits=5, shuffle=False, random_state=0).split(X, y))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
###Output
_____no_output_____
###Markdown
By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (`cv5_idx`) to the `cross_val_score` scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds. Next, let us use the `GridSearchCV` object and feed it the same 5 cross-validation sets (via the pre-generated `cv5_idx` indices):
###Code
if Version(sklearn_version) < '0.18':
from sklearn.grid_search import GridSearchCV
else:
from sklearn.model_selection import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
###Output
Fitting 5 folds for each of 1 candidates, totalling 5 fits
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.400000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.200000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
###Markdown
As we can see, the scores for the 5 folds are exactly the same as the ones from `cross_val_score` earlier. Now, the best_score_ attribute of the `GridSearchCV` object, which becomes available after `fit`ting, returns the average accuracy score of the best model:
###Code
gs.best_score_
###Output
_____no_output_____
###Markdown
As we can see, the result above is consistent with the average score computed by `cross_val_score`.
###Code
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
###Output
_____no_output_____
###Markdown
End comment. Naive BayesPopular for text classification, e.g. spam filtering.* easy to implement* fast to compute* good performance with small datasetsSee http://sebastianraschka.com/Articles/2014_naive_bayes_1.html for more details. Working with bigger data - online algorithms and out-of-core learningThe grid-search in the previous section is quite computationally expensive.But real world datasets can be much larger!Out-of-core learning can help us deal with large datasets without super-computers.SGDClassifier: stochastic gradient descent classifier
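As an illustrative aside to the Naive Bayes note above (this sketch is not part of the book's code), a multinomial Naive Bayes classifier could be trained on the raw review texts prepared earlier; the pipeline below assumes that `X_train`, `y_train`, `X_test`, and `y_test` still hold the raw reviews and labels from the train/test split above:
###Code
# Illustrative sketch only: Multinomial Naive Bayes on simple bag-of-words counts
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline
nb_pipe = Pipeline([('vect', CountVectorizer(stop_words='english')),
                    ('clf', MultinomialNB())])
nb_pipe.fit(X_train, y_train)  # X_train holds raw review strings from the split above
print('Naive Bayes test accuracy: %.3f' % nb_pipe.score(X_test, y_test))
###Output
_____no_output_____
###Markdown
Now, back to out-of-core learning with the `SGDClassifier`: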
###Code
import numpy as np
import re
from nltk.corpus import stopwords
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path=csv_file))
###Output
_____no_output_____
###Markdown
Python generatorshttp://stackoverflow.com/questions/231767/what-does-the-yield-keyword-do-in-python
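For readers unfamiliar with `yield`, here is a minimal sketch (not from the book) of how a generator produces values lazily, one at a time:
###Code
# Illustrative only: a generator pauses at each `yield` until the next value is requested
def count_up_to(n):
    i = 0
    while i < n:
        yield i
        i += 1
gen = count_up_to(3)
print(next(gen))   # 0
print(next(gen))   # 1
print(list(gen))   # [2] -- the remaining values
###Output
_____no_output_____
###Markdown
The `get_minibatch` function below consumes such a document generator in fixed-size chunks: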
###Code
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
###Output
_____no_output_____
###Markdown
Out-of-core VectorizerCountVectorizer holds complete vocabulary in memoryTfidfVectorizer keeps all training data in memory[HashingVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.HashingVectorizer.html) comes to the rescue* hash words into histogram bins* can have collision, but with low probability* collision reduces histogram resolution, but still suffices for classification and can reduce number of features and thus over-fitting HashA function that maps items into cells in a hash table.* easy/fast to compute* can have collision, i.e. different items map into the same hash entry* try to minimize and/or handle collision
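To make the hashing trick concrete, here is a deliberately simplified sketch; it is not the actual implementation (`HashingVectorizer` uses a signed 32-bit MurmurHash3, whereas Python's built-in `hash()` is randomized per process):
###Code
# Simplified illustration of the hashing trick: map tokens to a fixed number of buckets
n_buckets = 8
for token in ['the', 'sun', 'is', 'shining']:
    bucket = abs(hash(token)) % n_buckets  # different tokens may end up in the same bucket (collision)
    print('%-8s -> bucket %d' % (token, bucket))
###Output
_____no_output_____
###Markdown
The `HashingVectorizer` below applies the same idea with 2**21 feature buckets: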
###Code
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
                         n_features=2**21, # large enough to minimize hash collisions
preprocessor=None,
tokenizer=tokenizer)
# logistic regression for loss
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path=csv_file)
###Output
_____no_output_____
###Markdown
Start out-of-core learningTraining: 45,000 samplesTest: 5,000 samples
###Code
# full size
num_batches = 45
batch_size = 1000
test_size = 5000
# subset if the fullset took too long to run
batch_size = 100
test_size = 500
import pyprind
pbar = pyprind.ProgBar(num_batches)
classes = np.array([0, 1])
for _ in range(num_batches):
X_train, y_train = get_minibatch(doc_stream, size=batch_size)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=test_size)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
###Output
_____no_output_____
###Markdown
Copyright (c) 2015, 2016 [Sebastian Raschka](sebastianraschka.com)https://github.com/rasbt/python-machine-learning-book[MIT License](https://github.com/rasbt/python-machine-learning-book/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 8 - Applying Machine Learning To Sentiment Analysis Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,scikit-learn,nltk
###Output
Sebastian Raschka
last updated: 2016-06-05
CPython 3.5.1
IPython 4.2.0
numpy 1.11.0
pandas 0.18.0
matplotlib 1.5.1
scikit-learn 0.17.1
nltk 3.2.1
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview - [Obtaining the IMDb movie review dataset](Obtaining-the-IMDb-movie-review-dataset)- [Introducing the bag-of-words model](Introducing-the-bag-of-words-model) - [Transforming words into feature vectors](Transforming-words-into-feature-vectors) - [Assessing word relevancy via term frequency-inverse document frequency](Assessing-word-relevancy-via-term-frequency-inverse-document-frequency) - [Cleaning text data](Cleaning-text-data) - [Processing documents into tokens](Processing-documents-into-tokens)- [Training a logistic regression model for document classification](Training-a-logistic-regression-model-for-document-classification)- [Working with bigger data – online algorithms and out-of-core learning](Working-with-bigger-data-–-online-algorithms-and-out-of-core-learning)- [Summary](Summary) Obtaining the IMDb movie review dataset The IMDB movie review set can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/).After downloading the dataset, decompress the files.A) If you are working with Linux or MacOS X, open a new terminal window, `cd` into the download directory and execute `tar -zxf aclImdb_v1.tar.gz`B) If you are working with Windows, download an archiver such as [7Zip](http://www.7-zip.org) to extract the files from the download archive. Compatibility Note:I received an email from a reader who was having troubles with reading the movie review texts due to encoding issues. Typically, Python's default encoding is set to `'utf-8'`, which shouldn't cause troubles when running this IPython notebook. You can simply check the encoding on your machine by firing up a new Python interpreter from the command line terminal and execute >>> import sys >>> sys.getdefaultencoding() If the returned result is **not** `'utf-8'`, you probably need to change your Python's encoding to `'utf-8'`, for example by typing `export PYTHONIOENCODING=utf8` in your terminal shell prior to running this IPython notebook. (Note that this is a temporary change, and it needs to be executed in the same shell that you'll use to launch `ipython notebook`.Alternatively, you can replace the lines with open(os.path.join(path, file), 'r') as infile: ... pd.read_csv('./movie_data.csv') ... df.to_csv('./movie_data.csv', index=False)by with open(os.path.join(path, file), 'r', encoding='utf-8') as infile: ... pd.read_csv('./movie_data.csv', encoding='utf-8') ... df.to_csv('./movie_data.csv', index=False, encoding='utf-8') in the following cells to achieve the desired effect.
###Code
import pyprind
import pandas as pd
import os
# change the `basepath` to the directory of the
# unzipped movie dataset
#basepath = '/Users/Sebastian/Desktop/aclImdb/'
basepath = './aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in os.listdir(path):
with open(os.path.join(path, file), 'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]], ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
###Output
0% 100%
[##############################] | ETA: 00:00:00
Total time elapsed: 00:06:23
###Markdown
Shuffling the DataFrame:
###Code
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
###Output
_____no_output_____
###Markdown
Optional: Saving the assembled data as CSV file:
###Code
df.to_csv('./movie_data.csv', index=False)
import pandas as pd
df = pd.read_csv('./movie_data.csv')
df.head(3)
###Output
_____no_output_____
###Markdown
Note: If you have problems with creating the `movie_data.csv` file in the previous chapter, you can download a zip archive at https://github.com/rasbt/python-machine-learning-book/tree/master/code/datasets/movie Introducing the bag-of-words model ... Transforming documents into feature vectors By calling the fit_transform method on CountVectorizer, we just constructed the vocabulary of the bag-of-words model and transformed the following three sentences into sparse feature vectors:
1. The sun is shining
2. The weather is sweet
3. The sun is shining, the weather is sweet, and one and one is two
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
###Output
_____no_output_____
###Markdown
Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts:
###Code
print(count.vocabulary_)
###Output
{'sun': 4, 'and': 0, 'is': 1, 'the': 6, 'shining': 3, 'two': 7, 'sweet': 5, 'weather': 8, 'one': 2}
###Markdown
As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary that maps the unique words to integer indices. Next, let us print the feature vectors that we just created: Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the CountVectorizer vocabulary. For example, the first feature at index position 0 represents the count of the word 'and', which only occurs in the last document, and the word 'is' at index position 1 (the 2nd feature in the document vectors) occurs in all three sentences. Those values in the feature vectors are also called the raw term frequencies: *tf (t,d)*—the number of times a term t occurs in a document *d*.
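For instance, a quick sketch of how to read a single raw term frequency out of this matrix, using the `bag` and `count` objects created above, is to look up a word's column via `vocabulary_`:

    # raw count of the word 'is' in the third toy document (expected: 3)
    print(bag.toarray()[2, count.vocabulary_['is']])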
###Code
print(bag.toarray())
###Output
[[0 1 0 1 1 0 1 0 0]
[0 1 0 0 0 1 1 0 1]
[2 3 2 1 1 1 2 1 1]]
###Markdown
Assessing word relevancy via term frequency-inverse document frequency
###Code
np.set_printoptions(precision=2)
###Output
_____no_output_____
###Markdown
When we are analyzing text data, we often encounter words that occur across multiple documents from both classes. Those frequently occurring words typically don't contain useful or discriminatory information. In this subsection, we will learn about a useful technique called term frequency-inverse document frequency (tf-idf) that can be used to downweight those frequently occurring words in the feature vectors. The tf-idf can be defined as the product of the term frequency and the inverse document frequency:$$\text{tf-idf}(t,d)=\text{tf (t,d)}\times \text{idf}(t,d)$$Here the tf(t, d) is the term frequency that we introduced in the previous section, and the inverse document frequency *idf(t, d)* can be calculated as:$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$where $n_d$ is the total number of documents, and *df(d, t)* is the number of documents *d* that contain the term *t*. Note that adding the constant 1 to the denominator is optional and serves the purpose of assigning a non-zero value to terms that occur in all training samples; the log is used to ensure that low document frequencies are not given too much weight. Scikit-learn implements yet another transformer, the `TfidfTransformer`, that takes the raw term frequencies from `CountVectorizer` as input and transforms them into tf-idfs:
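As a small numeric sketch of the textbook formula above (using the toy corpus from the previous cells, where the word 'is' appears in all three documents), the idf of 'is' comes out slightly negative, which already hints at why the scikit-learn implementation discussed below deviates a little from this textbook version:

    n_docs = 3                           # toy corpus size
    df_is = 3                            # 'is' occurs in every document
    print(np.log(n_docs / (1 + df_is)))  # ~ -0.29 with the optional +1 in the denominator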
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
###Output
[[ 0. 0.43 0. 0.56 0.56 0. 0.43 0. 0. ]
[ 0. 0.43 0. 0. 0. 0.56 0.43 0. 0.56]
[ 0.5 0.45 0.5 0.19 0.19 0.19 0.3 0.25 0.19]]
###Markdown
As we saw in the previous subsection, the word 'is' had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word 'is' is now associated with a relatively small tf-idf (0.45) in document 3, since it is also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information. However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the `TfidfTransformer` calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier. The equations for the idf and tf-idf that were implemented in scikit-learn are: $$\text{idf} (t,d) = \text{log}\frac{1 + n_d}{1 + \text{df}(d, t)}$$The tf-idf equation that was implemented in scikit-learn is as follows:$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$While it is also more typical to normalize the raw term frequencies before calculating the tf-idfs, the `TfidfTransformer` normalizes the tf-idfs directly. By default (`norm='l2'`), scikit-learn's `TfidfTransformer` applies the L2-normalization, which returns a vector of length 1 by dividing an un-normalized feature vector *v* by its L2-norm:$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$To make sure that we understand how `TfidfTransformer` works, let us walk through an example and calculate the tf-idf of the word 'is' in the 3rd document. The word 'is' has a term frequency of 3 (tf = 3) in document 3, and the document frequency of this term is 3 since the term 'is' occurs in all three documents (df = 3). Thus, we can calculate the idf as follows:$$\text{idf}("is", d3) = \text{log} \frac{1+3}{1+3} = 0$$Now in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
###Code
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
###Output
tf-idf of term "is" = 3.00
###Markdown
If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vector: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]. However, we notice that the values in this feature vector are different from the values that we obtained from the `TfidfTransformer` that we used previously. The final step that we are missing in this tf-idf calculation is the L2-normalization, which can be applied as follows: $$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]}{\sqrt{3.39^2 + 3.0^2 + 3.39^2 + 1.29^2 + 1.29^2 + 1.29^2 + 2.0^2 + 1.69^2 + 1.29^2}}$$$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$$$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$ As we can see, the results match the results returned by scikit-learn's `TfidfTransformer` (below). Since we now understand how tf-idfs are calculated, let us proceed to the next sections and apply those concepts to the movie review dataset.
###Code
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
###Markdown
Cleaning text data
###Code
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
###Output
_____no_output_____
###Markdown
Processing documents into tokens
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
###Output
_____no_output_____
###Markdown
Training a logistic regression model for document classification Strip HTML and punctuation to speed up the GridSearch later:
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=-1)
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
###Output
Test Accuracy: 0.899
###Markdown
Start comment: Please note that `gs_lr_tfidf.best_score_` is the average k-fold cross-validation score. I.e., if we have a `GridSearchCV` object with 5-fold cross-validation (like the one above), the `best_score_` attribute returns the average score over the 5-folds of the best model. To illustrate this with an example:
###Code
from sklearn.cross_validation import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
import numpy as np
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
cv5_idx = list(StratifiedKFold(y, n_folds=5, shuffle=False, random_state=0))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
###Output
_____no_output_____
###Markdown
By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (`cv5_idx`) to the `cross_val_score` scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds. Now, let us use the `GridSearchCV` object and feed it the same 5 cross-validation sets (via the pre-generated `cv5_idx` indices):
###Code
from sklearn.grid_search import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
###Output
Fitting 5 folds for each of 1 candidates, totalling 5 fits
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.400000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.200000 - 0.0s
[CV] ................................................................
[CV] ....................................... , score=0.600000 - 0.0s
###Markdown
As we can see, the scores for the 5 folds are exactly the same as the ones from `cross_val_score` earlier. Now, the best_score_ attribute of the `GridSearchCV` object, which becomes available after `fit`ting, returns the average accuracy score of the best model:
###Code
gs.best_score_
###Output
_____no_output_____
###Markdown
As we can see, the result above is consistent with the average score computed by `cross_val_score`.
###Code
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
###Output
_____no_output_____
###Markdown
End comment. Working with bigger data - online algorithms and out-of-core learning
###Code
import numpy as np
import re
from nltk.corpus import stopwords
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='./movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path='./movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
###Output
_____no_output_____
###Markdown
*Python Machine Learning 2nd Edition* by [Sebastian Raschka](https://sebastianraschka.com), Packt Publishing Ltd. 2017Code Repository: https://github.com/rasbt/python-machine-learning-book-2nd-editionCode License: [MIT License](https://github.com/rasbt/python-machine-learning-book-2nd-edition/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 8 - Applying Machine Learning To Sentiment Analysis Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a "Sebastian Raschka" -u -d -v -p numpy,pandas,sklearn,nltk
###Output
Sebastian Raschka
last updated: 2017-09-02
CPython 3.6.1
IPython 6.1.0
numpy 1.12.1
pandas 0.20.3
sklearn 0.19.0
nltk 3.2.4
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview - [Preparing the IMDb movie review data for text processing](Preparing-the-IMDb-movie-review-data-for-text-processing) - [Obtaining the IMDb movie review dataset](Obtaining-the-IMDb-movie-review-dataset) - [Preprocessing the movie dataset into more convenient format](Preprocessing-the-movie-dataset-into-more-convenient-format)- [Introducing the bag-of-words model](Introducing-the-bag-of-words-model) - [Transforming words into feature vectors](Transforming-words-into-feature-vectors) - [Assessing word relevancy via term frequency-inverse document frequency](Assessing-word-relevancy-via-term-frequency-inverse-document-frequency) - [Cleaning text data](Cleaning-text-data) - [Processing documents into tokens](Processing-documents-into-tokens)- [Training a logistic regression model for document classification](Training-a-logistic-regression-model-for-document-classification)- [Working with bigger data – online algorithms and out-of-core learning](Working-with-bigger-data-–-online-algorithms-and-out-of-core-learning)- [Topic modeling](Topic-modeling) - [Decomposing text documents with Latent Dirichlet Allocation](Decomposing-text-documents-with-Latent-Dirichlet-Allocation) - [Latent Dirichlet Allocation with scikit-learn](Latent-Dirichlet-Allocation-with-scikit-learn)- [Summary](Summary) Preparing the IMDb movie review data for text processing Obtaining the IMDb movie review dataset The IMDB movie review set can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/).After downloading the dataset, decompress the files.A) If you are working with Linux or MacOS X, open a new terminal windowm `cd` into the download directory and execute `tar -zxf aclImdb_v1.tar.gz`B) If you are working with Windows, download an archiver such as [7Zip](http://www.7-zip.org) to extract the files from the download archive. **Optional code to download and unzip the dataset via Python:**
###Code
import os
import sys
import tarfile
import time
source = 'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'
target = 'aclImdb_v1.tar.gz'
def reporthook(count, block_size, total_size):
global start_time
if count == 0:
start_time = time.time()
return
duration = time.time() - start_time
progress_size = int(count * block_size)
speed = progress_size / (1024.**2 * duration)
percent = count * block_size * 100. / total_size
sys.stdout.write("\r%d%% | %d MB | %.2f MB/s | %d sec elapsed" %
(percent, progress_size / (1024.**2), speed, duration))
sys.stdout.flush()
if not os.path.isdir('aclImdb') and not os.path.isfile('aclImdb_v1.tar.gz'):
if (sys.version_info < (3, 0)):
import urllib
urllib.urlretrieve(source, target, reporthook)
else:
import urllib.request
urllib.request.urlretrieve(source, target, reporthook)
if not os.path.isdir('aclImdb'):
with tarfile.open(target, 'r:gz') as tar:
tar.extractall()
###Output
_____no_output_____
###Markdown
Preprocessing the movie dataset into more convenient format
###Code
import pyprind
import pandas as pd
import os
# change the `basepath` to the directory of the
# unzipped movie dataset
basepath = 'aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in os.listdir(path):
with open(os.path.join(path, file),
'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]],
ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
###Output
0% [##############################] 100% | ETA: 00:00:00
Total time elapsed: 00:02:21
###Markdown
Shuffling the DataFrame:
###Code
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
###Output
_____no_output_____
###Markdown
Optional: Saving the assembled data as CSV file:
###Code
df.to_csv('movie_data.csv', index=False, encoding='utf-8')
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
###Output
_____no_output_____
###Markdown
Note: If you have problems with creating the `movie_data.csv` file in the previous chapter, you can download a zip archive at https://github.com/rasbt/python-machine-learning-book-2nd-edition/tree/master/code/ch08/ Introducing the bag-of-words model ... Transforming documents into feature vectors By calling the fit_transform method on CountVectorizer, we just constructed the vocabulary of the bag-of-words model and transformed the following three sentences into sparse feature vectors:
1. The sun is shining
2. The weather is sweet
3. The sun is shining, the weather is sweet, and one and one is two
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
###Output
_____no_output_____
###Markdown
Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts:
###Code
print(count.vocabulary_)
###Output
{'the': 6, 'sun': 4, 'is': 1, 'shining': 3, 'weather': 8, 'sweet': 5, 'and': 0, 'one': 2, 'two': 7}
###Markdown
As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary that maps the unique words to integer indices. Next, let us print the feature vectors that we just created: Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the CountVectorizer vocabulary. For example, the first feature at index position 0 represents the count of the word 'and', which only occurs in the last document, and the word 'is' at index position 1 (the 2nd feature in the document vectors) occurs in all three sentences. Those values in the feature vectors are also called the raw term frequencies: *tf (t,d)*—the number of times a term t occurs in a document *d*.
###Code
print(bag.toarray())
###Output
[[0 1 0 1 1 0 1 0 0]
[0 1 0 0 0 1 1 0 1]
[2 3 2 1 1 1 2 1 1]]
###Markdown
Assessing word relevancy via term frequency-inverse document frequency
###Code
np.set_printoptions(precision=2)
###Output
_____no_output_____
###Markdown
When we are analyzing text data, we often encounter words that occur across multiple documents from both classes. Those frequently occurring words typically don't contain useful or discriminatory information. In this subsection, we will learn about a useful technique called term frequency-inverse document frequency (tf-idf) that can be used to downweight those frequently occurring words in the feature vectors. The tf-idf can be defined as the product of the term frequency and the inverse document frequency:$$\text{tf-idf}(t,d)=\text{tf (t,d)}\times \text{idf}(t,d)$$Here the tf(t, d) is the term frequency that we introduced in the previous section, and the inverse document frequency *idf(t, d)* can be calculated as:$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$where $n_d$ is the total number of documents, and *df(d, t)* is the number of documents *d* that contain the term *t*. Note that adding the constant 1 to the denominator is optional and serves the purpose of assigning a non-zero value to terms that occur in all training samples; the log is used to ensure that low document frequencies are not given too much weight. Scikit-learn implements yet another transformer, the `TfidfTransformer`, that takes the raw term frequencies from `CountVectorizer` as input and transforms them into tf-idfs:
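As a side note, scikit-learn also provides `TfidfVectorizer` (used later in this chapter), which chains `CountVectorizer` and `TfidfTransformer` in a single estimator; a minimal sketch for the toy `docs` defined above, assuming default settings, yields the same matrix:

    from sklearn.feature_extraction.text import TfidfVectorizer
    tfidf_vect = TfidfVectorizer()   # defaults: use_idf=True, norm='l2', smooth_idf=True
    print(tfidf_vect.fit_transform(docs).toarray())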
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True,
norm='l2',
smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs))
.toarray())
###Output
[[ 0. 0.43 0. 0.56 0.56 0. 0.43 0. 0. ]
[ 0. 0.43 0. 0. 0. 0.56 0.43 0. 0.56]
[ 0.5 0.45 0.5 0.19 0.19 0.19 0.3 0.25 0.19]]
###Markdown
As we saw in the previous subsection, the word 'is' had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word 'is' is now associated with a relatively small tf-idf (0.45) in document 3, since it is also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information. However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the `TfidfTransformer` calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier. The equations for the idf and tf-idf that were implemented in scikit-learn are: $$\text{idf} (t,d) = \text{log}\frac{1 + n_d}{1 + \text{df}(d, t)}$$The tf-idf equation that was implemented in scikit-learn is as follows:$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$While it is also more typical to normalize the raw term frequencies before calculating the tf-idfs, the `TfidfTransformer` normalizes the tf-idfs directly. By default (`norm='l2'`), scikit-learn's `TfidfTransformer` applies the L2-normalization, which returns a vector of length 1 by dividing an un-normalized feature vector *v* by its L2-norm:$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$To make sure that we understand how `TfidfTransformer` works, let us walk through an example and calculate the tf-idf of the word 'is' in the 3rd document. The word 'is' has a term frequency of 3 (tf = 3) in document 3, and the document frequency of this term is 3 since the term 'is' occurs in all three documents (df = 3). Thus, we can calculate the idf as follows:$$\text{idf}("is", d3) = \text{log} \frac{1+3}{1+3} = 0$$Now in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
###Code
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
###Output
tf-idf of term "is" = 3.00
###Markdown
If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vector: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]. However, we notice that the values in this feature vector are different from the values that we obtained from the `TfidfTransformer` that we used previously. The final step that we are missing in this tf-idf calculation is the L2-normalization, which can be applied as follows: $$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]}{\sqrt{3.39^2 + 3.0^2 + 3.39^2 + 1.29^2 + 1.29^2 + 1.29^2 + 2.0^2 + 1.69^2 + 1.29^2}}$$$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$$$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$ As we can see, the results match the results returned by scikit-learn's `TfidfTransformer` (below). Since we now understand how tf-idfs are calculated, let us proceed to the next sections and apply those concepts to the movie review dataset.
###Code
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
###Markdown
Cleaning text data
###Code
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)',
text)
text = (re.sub('[\W]+', ' ', text.lower()) +
' '.join(emoticons).replace('-', ''))
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
###Output
_____no_output_____
###Markdown
Processing documents into tokens
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
###Output
_____no_output_____
###Markdown
Training a logistic regression model for document classification Strip HTML and punctuation to speed up the GridSearch later:
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=-1)
###Output
_____no_output_____
###Markdown
**Important Note about `n_jobs`**Please note that it is highly recommended to use `n_jobs=-1` (instead of `n_jobs=1`) in the previous code example to utilize all available cores on your machine and speed up the grid search. However, some Windows users reported issues when running the previous code with the `n_jobs=-1` setting related to pickling the tokenizer and tokenizer_porter functions for multiprocessing on Windows. Another workaround would be to replace those two functions, `[tokenizer, tokenizer_porter]`, with `[str.split]`. However, note that the replacement by the simple `str.split` would not support stemming. **Important Note about the running time**Executing the following code cell **may take up to 30-60 min** depending on your machine, since based on the parameter grid we defined, there are 2*2*2*3*5 + 2*2*2*3*5 = 240 models to fit.If you do not wish to wait so long, you could reduce the size of the dataset by decreasing the number of training samples, for example, as follows: X_train = df.loc[:2500, 'review'].values y_train = df.loc[:2500, 'sentiment'].values However, note that decreasing the training set size to such a small number will likely result in poorly performing models. Alternatively, you can delete parameters from the grid above to reduce the number of models to fit -- for example, by using the following: param_grid = [{'vect__ngram_range': [(1, 1)], 'vect__stop_words': [stop, None], 'vect__tokenizer': [tokenizer], 'clf__penalty': ['l1', 'l2'], 'clf__C': [1.0, 10.0]}, ]
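For instance, a minimal sketch of the `str.split` workaround mentioned above (same pipeline and `stop` list as before, but without Porter stemming) could look like this:

    param_grid = [{'vect__ngram_range': [(1, 1)],
                   'vect__stop_words': [stop, None],
                   'vect__tokenizer': [str.split],  # picklable on Windows, no stemming
                   'clf__penalty': ['l1', 'l2'],
                   'clf__C': [1.0, 10.0, 100.0]}]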
###Code
## @Readers: PLEASE IGNORE THIS CELL
##
## This cell is meant to generate more
## "logging" output when this notebook is run
## on the Travis Continuous Integration
## platform to test the code as well as
## speeding up the run using a smaller
## dataset for debugging
if 'TRAVIS' in os.environ:
gs_lr_tfidf.verbose=2
X_train = df.loc[:250, 'review'].values
y_train = df.loc[:250, 'sentiment'].values
X_test = df.loc[25000:25250, 'review'].values
y_test = df.loc[25000:25250, 'sentiment'].values
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
###Output
Test Accuracy: 0.899
###Markdown
Start comment: Please note that `gs_lr_tfidf.best_score_` is the average k-fold cross-validation score. I.e., if we have a `GridSearchCV` object with 5-fold cross-validation (like the one above), the `best_score_` attribute returns the average score over the 5-folds of the best model. To illustrate this with an example:
###Code
from sklearn.linear_model import LogisticRegression
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
cv5_idx = list(StratifiedKFold(n_splits=5, shuffle=False, random_state=0).split(X, y))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
###Output
_____no_output_____
###Markdown
By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (`cv5_idx`) to the `cross_val_score` scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds. Now, let us use the `GridSearchCV` object and feed it the same 5 cross-validation sets (via the pre-generated `cv5_idx` indices):
###Code
from sklearn.model_selection import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
###Output
Fitting 5 folds for each of 1 candidates, totalling 5 fits
[CV] ................................................................
[CV] ................................. , score=0.600000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.400000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.600000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.200000, total= 0.0s
[CV] ................................................................
[CV] ................................. , score=0.600000, total= 0.0s
###Markdown
As we can see, the scores for the 5 folds are exactly the same as the ones from `cross_val_score` earlier. Now, the best_score_ attribute of the `GridSearchCV` object, which becomes available after `fit`ting, returns the average accuracy score of the best model:
###Code
gs.best_score_
###Output
_____no_output_____
###Markdown
As we can see, the result above is consistent with the average score computed by `cross_val_score`.
###Code
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
###Output
_____no_output_____
###Markdown
End comment. Working with bigger data - online algorithms and out-of-core learning
###Code
import numpy as np
import re
from nltk.corpus import stopwords
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path='movie_data.csv')
###Output
/Users/sebastian/miniconda3/lib/python3.6/site-packages/sklearn/linear_model/stochastic_gradient.py:73: DeprecationWarning: n_iter parameter is deprecated in 0.19 and will be removed in 0.21. Use max_iter and tol instead.
DeprecationWarning)
###Markdown
**Note**- You can replace `SGDClassifier(n_iter=..., ...)` with `SGDClassifier(max_iter=..., ...)` in scikit-learn >= 0.19 to avoid the deprecation warning shown above. The `n_iter` parameter is used here deliberately, because some people still use scikit-learn 0.18.
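One way to make the classifier instantiation above work on both scikit-learn 0.18 and >= 0.19 is to branch on the installed version; this is only a sketch, not the book's code:

    from distutils.version import LooseVersion
    import sklearn
    from sklearn.linear_model import SGDClassifier

    if LooseVersion(sklearn.__version__) >= LooseVersion('0.19'):
        clf = SGDClassifier(loss='log', random_state=1, max_iter=1, tol=None)
    else:
        clf = SGDClassifier(loss='log', random_state=1, n_iter=1)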
###Code
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
###Output
_____no_output_____
###Markdown
Topic modeling Decomposing text documents with Latent Dirichlet Allocation Latent Dirichlet Allocation with scikit-learn
###Code
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
## @Readers: PLEASE IGNORE THIS CELL
##
## This cell is meant to create a smaller dataset if
## the notebook is run on the Travis Continuous Integration
## platform to test the code on a smaller dataset
## to prevent timeout errors and just serves a debugging tool
## for this notebook
if 'TRAVIS' in os.environ:
df.loc[:500].to_csv('movie_data.csv')
df = pd.read_csv('movie_data.csv', nrows=500)
print('SMALL DATA SUBSET CREATED FOR TESTING')
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer(stop_words='english',
max_df=.1,
max_features=5000)
X = count.fit_transform(df['review'].values)
from sklearn.decomposition import LatentDirichletAllocation
lda = LatentDirichletAllocation(n_topics=10,
random_state=123,
learning_method='batch')
X_topics = lda.fit_transform(X)
lda.components_.shape
n_top_words = 5
feature_names = count.get_feature_names()
for topic_idx, topic in enumerate(lda.components_):
print("Topic %d:" % (topic_idx + 1))
print(" ".join([feature_names[i]
for i in topic.argsort()\
[:-n_top_words - 1:-1]]))
###Output
Topic 1:
worst minutes awful script stupid
Topic 2:
family mother father children girl
Topic 3:
american war dvd music tv
Topic 4:
human audience cinema art sense
Topic 5:
police guy car dead murder
Topic 6:
horror house sex girl woman
Topic 7:
role performance comedy actor performances
Topic 8:
series episode war episodes tv
Topic 9:
book version original read novel
Topic 10:
action fight guy guys cool
###Markdown
Based on reading the 5 most important words for each topic, we may guess that the LDA identified the following topics:
1. Generally bad movies (not really a topic category)
2. Movies about families
3. War movies
4. Art movies
5. Crime movies
6. Horror movies
7. Comedies
8. Movies somehow related to TV shows
9. Movies based on books
10. Action movies
To confirm that the categories make sense based on the reviews, let's print the first 300 characters from 3 movies of the horror movie category (category 6 at index position 5):
###Code
horror = X_topics[:, 5].argsort()[::-1]
for iter_idx, movie_idx in enumerate(horror[:3]):
print('\nHorror movie #%d:' % (iter_idx + 1))
print(df['review'][movie_idx][:300], '...')
###Output
Horror movie #1:
House of Dracula works from the same basic premise as House of Frankenstein from the year before; namely that Universal's three most famous monsters; Dracula, Frankenstein's Monster and The Wolf Man are appearing in the movie together. Naturally, the film is rather messy therefore, but the fact that ...
Horror movie #2:
Okay, what the hell kind of TRASH have I been watching now? "The Witches' Mountain" has got to be one of the most incoherent and insane Spanish exploitation flicks ever and yet, at the same time, it's also strangely compelling. There's absolutely nothing that makes sense here and I even doubt there ...
Horror movie #3:
<br /><br />Horror movie time, Japanese style. Uzumaki/Spiral was a total freakfest from start to finish. A fun freakfest at that, but at times it was a tad too reliant on kitsch rather than the horror. The story is difficult to summarize succinctly: a carefree, normal teenage girl starts coming fac ...
###Markdown
Using the preceding code example, we printed the first 300 characters from the top 3 horror movies, and we can see that the reviews -- even though we don't know which exact movie they belong to -- indeed sound like reviews of horror movies. (However, one might argue that movie 2 could also belong to topic category 1.) Summary ... ---Readers may ignore the next cell.
###Code
! python ../.convert_notebook_to_script.py --input ch08.ipynb --output ch08.py
###Output
[NbConvertApp] Converting notebook ch08.ipynb to script
[NbConvertApp] Writing 24627 bytes to ch08.py
###Markdown
Chapter 8. Applying Machine Learning to Sentiment Analysis **You can view this notebook in the Jupyter notebook viewer (nbviewer.jupyter.org) or run it on Google Colab (colab.research.google.com) via the links below.** View in the Jupyter notebook viewer Run on Google Colab `watermark` is a utility for printing the Python packages used in a Jupyter notebook. To install the `watermark` package, uncomment the following cell and run it.
###Code
#!pip install watermark
%load_ext watermark
%watermark -u -d -v -p numpy,pandas,sklearn,nltk
###Output
last updated: 2019-05-27
CPython 3.7.3
IPython 7.5.0
numpy 1.16.3
pandas 0.24.2
sklearn 0.21.1
nltk 3.4.1
###Markdown
Preparing the IMDb movie review data for text processing Obtaining the movie review dataset The IMDb movie review dataset can be downloaded from [http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz](http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz). After downloading, decompress the file. A) If you are using Linux or macOS, open a new terminal window, `cd` into the download directory, and execute `tar -zxf aclImdb_v1.tar.gz` B) If you are using Windows, you can install a free archiver such as 7Zip (http://www.7-zip.org) to extract the downloaded file. **To download the file directly on Colab or Linux, uncomment the following cell and run it.**
###Code
#!wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
###Output
_____no_output_____
###Markdown
**You can also extract the archive directly in Python as follows:**
###Code
import os
import tarfile
if not os.path.isdir('aclImdb'):
with tarfile.open('aclImdb_v1.tar.gz', 'r:gz') as tar:
tar.extractall()
###Output
_____no_output_____
###Markdown
Preprocessing the movie review dataset into a more convenient format `pyprind` is a utility for displaying a progress bar in Jupyter notebooks. To install the `pyprind` package, uncomment the following cell and run it.
###Code
#!pip install pyprind
import pyprind
import pandas as pd
import os
# change `basepath` to the directory of the
# unzipped movie review dataset
basepath = 'aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in sorted(os.listdir(path)):
with open(os.path.join(path, file),
'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]],
ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
###Output
0% [##############################] 100% | ETA: 00:00:00
Total time elapsed: 00:01:43
###Markdown
Shuffling the DataFrame:
###Code
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
###Output
_____no_output_____
###Markdown
Optional: Saving the assembled data as a CSV file:
###Code
df.to_csv('movie_data.csv', index=False, encoding='utf-8')
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
df.shape
###Output
_____no_output_____
###Markdown
Introducing the bag-of-words model Transforming words into feature vectors By calling the fit_transform method of CountVectorizer, we construct the vocabulary of the bag-of-words model and transform the following three sentences into sparse feature vectors:
1. The sun is shining
2. The weather is sweet
3. The sun is shining, the weather is sweet, and one and one is two
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
###Output
_____no_output_____
###Markdown
Printing the contents of the vocabulary helps us understand the concept behind the bag-of-words model:
###Code
print(count.vocabulary_)
###Output
{'the': 6, 'sun': 4, 'is': 1, 'shining': 3, 'weather': 8, 'sweet': 5, 'and': 0, 'one': 2, 'two': 7}
###Markdown
As we can see from the previous output, the vocabulary is stored in a Python dictionary that maps the unique words to integer indices. Next, let's print the feature vectors we just created: Each index position in the feature vectors corresponds to the integer value stored in the CountVectorizer vocabulary dictionary. For example, the first feature at index 0 represents the count of the word 'and', which only appears in the last document, and the word 'is' at index 1 (the second column of the feature vectors) appears in all three sentences. These values in the feature vectors are also called term frequencies: the number of times a term t occurs in a document d is written as *tf (t,d)*.
###Code
print(bag.toarray())
###Output
[[0 1 0 1 1 0 1 0 0]
[0 1 0 0 0 1 1 0 1]
[2 3 2 1 1 1 2 1 1]]
###Markdown
Assessing word relevancy via term frequency-inverse document frequency (tf-idf)
###Code
np.set_printoptions(precision=2)
###Output
_____no_output_____
###Markdown
When we analyze text data, we often encounter words that appear in documents of both classes. Such frequently occurring words typically don't carry useful or discriminatory information. In this section, we will learn about term frequency-inverse document frequency (tf-idf), a technique that downweights these frequently occurring words in the feature vectors. The tf-idf is defined as the product of the term frequency and the inverse document frequency:$$\text{tf-idf}(t,d)=\text{tf (t,d)}\times \text{idf}(t,d)$$Here, tf(t, d) is the term frequency from the previous section, and the inverse document frequency *idf(t, d)* is calculated as:$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$where $n_d$ is the total number of documents and *df(d, t)* is the number of documents d that contain the term t. Adding the constant 1 to the denominator is optional; it keeps the denominator from becoming zero for words that never appear in the training samples. The log ensures that the inverse document frequency does not grow too large when the document frequency *df(d, t)* is low. The scikit-learn library provides the `TfidfTransformer` class, which takes the term frequencies produced by the `CountVectorizer` class as input and converts them into tf-idfs:
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True,
norm='l2',
smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs))
.toarray())
###Output
[[0. 0.43 0. 0.56 0.56 0. 0.43 0. 0. ]
[0. 0.43 0. 0. 0. 0.56 0.43 0. 0.56]
[0.5 0.45 0.5 0.19 0.19 0.19 0.3 0.25 0.19]]
###Markdown
As we saw in the previous section, the word 'is' had the largest term frequency in the third document because it is the most frequently occurring word. After transforming the same feature vector into tf-idfs, the word 'is' is now associated with a relatively small tf-idf (0.45) in document 3. Since this word also appears in the first and second documents, it is unlikely to contain useful, discriminatory information. If we manually computed the tf-idf of each word in the feature vectors, we would notice that the `TfidfTransformer` calculates tf-idfs slightly differently from the standard formula defined earlier. The inverse document frequency formula implemented in scikit-learn is: $$\text{idf} (t,d) = \text{log}\frac{1 + n_d}{1 + \text{df}(d, t)}$$Similarly, the tf-idf computed in scikit-learn differs slightly from the formula we defined earlier:$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$The raw term frequencies are usually normalized before computing the tf-idfs, but the `TfidfTransformer` class normalizes the tf-idfs directly. By default, scikit-learn's `TfidfTransformer` applies L2-normalization (norm='l2'). Dividing an un-normalized feature vector v by its L2-norm returns a vector of length 1:$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$To understand how TfidfTransformer works, let's walk through an example and calculate the tf-idf of the word 'is' in the third document. In the third document, the word 'is' has a term frequency of 3 (tf = 3), and its document frequency is 3 because it appears in all three documents (df = 3). Therefore, the inverse document frequency is calculated as:$$\text{idf}("is", d3) = \text{log} \frac{1+3}{1+3} = 0$$Now, to compute the tf-idf, we add 1 to the inverse document frequency and multiply it by the term frequency:$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
###Code
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
###Output
tf-idf of term "is" = 3.00
###Markdown
If we repeated these calculations for all terms in the third document, we would obtain the tf-idf vector [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]. The values in this feature vector differ from those we obtained from the TfidfTransformer earlier. The final step missing from this tf-idf calculation is the following L2-normalization: $$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]}{\sqrt{3.39^2 + 3.0^2 + 3.39^2 + 1.29^2 + 1.29^2 + 1.29^2 + 2.0^2 + 1.69^2 + 1.29^2}}$$$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$$$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$ As the result shows, this now matches the result returned by scikit-learn's `TfidfTransformer` (below). Now that we understand how tf-idfs are calculated, let's move on to the next sections and apply these concepts to the movie review dataset.
###Code
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
###Output
_____no_output_____
###Markdown
Cleaning text data
###Code
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)',
text)
text = (re.sub('[\W]+', ' ', text.lower()) +
' '.join(emoticons).replace('-', ''))
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
df['review'].map(preprocessor)
###Output
_____no_output_____
###Markdown
Processing documents into tokens
###Code
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
###Output
_____no_output_____
###Markdown
Training a logistic regression model for document classification
###Code
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(solver='liblinear', random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=1)
###Output
_____no_output_____
###Markdown
**A note on the `n_jobs` parameter** In the preceding code example, it is highly recommended to use `n_jobs=-1` (instead of `n_jobs=1`) to utilize all available CPU cores on your machine and speed up the grid search. On some systems, however, setting `n_jobs=-1` for multiprocessing can cause problems with serializing the `tokenizer` and `tokenizer_porter` functions. In that case, you can work around the issue by replacing `[tokenizer, tokenizer_porter]` with `[str.split]`. Note, however, that replacing them with `str.split` means no stemming is performed. **A note on the running time** Executing the following code cell **may take 30-60 minutes** depending on your machine, because the parameter grid we defined requires training 2*2*2*3*5 + 2*2*2*3*5 = 240 models. **Even on Colab, the run may take a long time because only a few CPU cores are available.** If you don't want to wait that long, you can reduce the number of training samples in the dataset as follows: X_train = df.loc[:2500, 'review'].values y_train = df.loc[:2500, 'sentiment'].values Reducing the training set size will, however, degrade the model's performance. You can also reduce the number of models to train by removing parameters from the grid, for example: param_grid = [{'vect__ngram_range': [(1, 1)], 'vect__stop_words': [stop, None], 'vect__tokenizer': [tokenizer], 'clf__penalty': ['l1', 'l2'], 'clf__C': [1.0, 10.0]}, ]
###Code
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
###Output
Test Accuracy: 0.899
###Markdown
Working with bigger data - online algorithms and out-of-core learning
###Code
# This cell is not contained in the book; it was added for convenience
# so that you can start from here without executing the previous code.
import os
import gzip
if not os.path.isfile('movie_data.csv'):
if not os.path.isfile('movie_data.csv.gz'):
print('Please place a copy of the movie_data.csv.gz'
'in this directory. You can obtain it by'
'a) executing the code in the beginning of this'
'notebook or b) by downloading it from GitHub:'
'https://github.com/rasbt/python-machine-learning-'
'book-2nd-edition/blob/master/code/ch08/movie_data.csv.gz')
else:
in_f = gzip.open('movie_data.csv.gz', 'rb')
out_f = open('movie_data.csv', 'wb')
out_f.write(in_f.read())
in_f.close()
out_f.close()
import numpy as np
import re
from nltk.corpus import stopwords
# The `stop` object was defined earlier, but we recreate it here
# so that this part of the code can be run on its own for convenience.
stop = stopwords.words('english')
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
pass
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
clf = SGDClassifier(loss='log', random_state=1, max_iter=1)
doc_stream = stream_docs(path='movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
###Output
_____no_output_____
###Markdown
Topic modeling Latent Dirichlet Allocation with scikit-learn
###Code
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer(stop_words='english',
max_df=.1,
max_features=5000)
X = count.fit_transform(df['review'].values)
from sklearn.decomposition import LatentDirichletAllocation
lda = LatentDirichletAllocation(n_components=10,
random_state=123,
learning_method='batch')
X_topics = lda.fit_transform(X)
lda.components_.shape
n_top_words = 5
feature_names = count.get_feature_names()
for topic_idx, topic in enumerate(lda.components_):
print("토픽 %d:" % (topic_idx + 1))
print(" ".join([feature_names[i]
for i in topic.argsort()\
[:-n_top_words - 1:-1]]))
###Output
Topic 1:
worst minutes awful script stupid
Topic 2:
family mother father children girl
Topic 3:
american war dvd music tv
Topic 4:
human audience cinema art sense
Topic 5:
police guy car dead murder
Topic 6:
horror house sex girl woman
Topic 7:
role performance comedy actor performances
Topic 8:
series episode war episodes tv
Topic 9:
book version original read novel
Topic 10:
action fight guy guys cool
###Markdown
Based on the five most important words for each topic, we may guess that the LDA identified the following topics:
1. Generally bad movies (not really a topic category)
2. Family movies
3. War movies
4. Art movies
5. Crime movies
6. Horror movies
7. Comedy movies
8. Movies related to TV shows
9. Movies based on novels
10. Action movies
To confirm that the categories were chosen sensibly, let's print the reviews of three movies from the horror movie category (horror is category 6, so its index is 5):
###Code
horror = X_topics[:, 5].argsort()[::-1]
for iter_idx, movie_idx in enumerate(horror[:3]):
print('\nHorror movie #%d:' % (iter_idx + 1))
print(df['review'][movie_idx][:300], '...')
###Output
Horror movie #1:
House of Dracula works from the same basic premise as House of Frankenstein from the year before; namely that Universal's three most famous monsters; Dracula, Frankenstein's Monster and The Wolf Man are appearing in the movie together. Naturally, the film is rather messy therefore, but the fact that ...
Horror movie #2:
Okay, what the hell kind of TRASH have I been watching now? "The Witches' Mountain" has got to be one of the most incoherent and insane Spanish exploitation flicks ever and yet, at the same time, it's also strangely compelling. There's absolutely nothing that makes sense here and I even doubt there ...
Horror movie #3:
<br /><br />Horror movie time, Japanese style. Uzumaki/Spiral was a total freakfest from start to finish. A fun freakfest at that, but at times it was a tad too reliant on kitsch rather than the horror. The story is difficult to summarize succinctly: a carefree, normal teenage girl starts coming fac ...
|
notebooks/handson_analysis.ipynb | ###Markdown
Hands-on 2: How to create a fMRI analysis workflowThe purpose of this section is that you set up a complete fMRI analysis workflow yourself. So that in the end, you are able to perform the analysis from A-Z, i.e. from preprocessing to group analysis. This section will cover the analysis part, the previous section [Hands-on 1: Preprocessing](handson_preprocessing.ipynb) handles the preprocessing part.We will use this opportunity to show you some nice additional interfaces/nodes that might not be relevant to your usual analysis. But it's always nice to know that they exist. And hopefully, this will encourage you to investigate all other interfaces that Nipype can bring to the tip of your finger.Important: You will not be able to go through this notebook if you haven't preprocessed your subjects first. 1st-level Analysis Workflow StructureIn this notebook, we will create a workflow that performs 1st-level analysis and normalizes the resulting beta weights to the MNI template. In concrete steps this means: 1. Specify 1st-level model parameters 2. Specify 1st-level contrasts 3. Estimate 1st-level contrasts 4. Normalize 1st-level contrasts ImportsIt's always best to have all relevant module imports at the beginning of your script. So let's import what we most certainly need.
###Code
from nilearn import plotting
%matplotlib inline
# Get the Node and Workflow object
from nipype import Node, Workflow
# Specify which SPM to use
from nipype.interfaces.matlab import MatlabCommand
MatlabCommand.set_default_paths('/opt/spm12-r7219/spm12_mcr/spm12')
###Output
_____no_output_____
###Markdown
**Note:** Ideally you would also put the imports of all the interfaces that you use here at the top. But as we will develop the workflow step by step, we can also import the relevant modules as we go. Create Nodes and Workflow connectionsLet's create all the nodes that we need! Make sure to specify all relevant inputs and keep in mind which ones you will later need to connect in your pipeline. Workflow for the 1st-level analysisWe recommend creating the workflow and establishing all its connections at a later place in your script. This helps to keep everything nicely together. But for this hands-on example, it makes sense to establish the connections between the nodes as we go.And for this, we first need to create a workflow:
###Code
# Create the workflow here
# Hint: use 'base_dir' to specify where to store the working directory
analysis1st = Workflow(name='work_1st', base_dir='/output/')
###Output
_____no_output_____
###Markdown
Specify 1st-level model parameters (stimuli onsets, duration, etc.) To specify the 1st-level model, we need the subject-specific onset times and durations of the stimuli. Luckily, as we are working with a BIDS dataset, this information is nicely stored in a `tsv` file:
###Code
import pandas as pd
trialinfo = pd.read_table('/data/ds000114/task-fingerfootlips_events.tsv')
trialinfo
###Output
_____no_output_____
###Markdown
Using pandas is probably the quickest and easiest way to aggregate stimulus information per condition.
###Code
for group in trialinfo.groupby('trial_type'):
print(group)
print("")
###Output
_____no_output_____
###Markdown
To create a GLM model, Nipype needs a list of `Bunch` objects per session. As we only have one session, our object needs to look as follows: [Bunch(conditions=['Finger', 'Foot', 'Lips'], durations=[[15.0, 15.0, 15.0, 15.0, 15.0], [15.0, 15.0, 15.0, 15.0, 15.0], [15.0, 15.0, 15.0, 15.0, 15.0]], onsets=[[10, 100, 190, 280, 370], [40, 130, 220, 310, 400], [70, 160, 250, 340, 430]] )]For more information see either the [official documentation](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.algorithms.modelgen.html) or the [nipype_tutorial example](https://miykael.github.io/nipype_tutorial/notebooks/example_1stlevel.htmlSpecify-GLM-Model).So, let's create this Bunch object that we can then use for the GLM model.
###Code
import pandas as pd
from nipype.interfaces.base import Bunch
trialinfo = pd.read_table('/data/ds000114/task-fingerfootlips_events.tsv')
conditions = []
onsets = []
durations = []
for group in trialinfo.groupby('trial_type'):
conditions.append(group[0])
onsets.append(list(group[1].onset -10)) # subtracting 10s due to removing of 4 dummy scans
durations.append(group[1].duration.tolist())
subject_info = [Bunch(conditions=conditions,
onsets=onsets,
durations=durations,
)]
subject_info
###Output
_____no_output_____
###Markdown
Good! Now we can create the node that will create the SPM model. For this we will be using `SpecifySPMModel`. As a reminder the TR of the acquisition is 2.5s and we want to use a high pass filter of 128.
###Code
from nipype.algorithms.modelgen import SpecifySPMModel
# Initiate the SpecifySPMModel node here
modelspec = Node(SpecifySPMModel(concatenate_runs=False,
input_units='secs',
output_units='secs',
time_repetition=2.5,
high_pass_filter_cutoff=128,
subject_info=subject_info),
name="modelspec")
###Output
_____no_output_____
###Markdown
This node will also need some additional inputs, such as the preprocessed functional images, the motion parameters etc. We will specify those once we take care of the workflow data input stream. Specify 1st-level contrastsTo do any GLM analysis, we need to also define the contrasts that we want to investigate. If we recap, we had three different conditions in the **fingerfootlips** task in this dataset:- **finger**- **foot**- **lips**Therefore, we could create the following contrasts (seven T-contrasts and two F-contrasts):
###Code
# Condition names
condition_names = ['Finger', 'Foot', 'Lips']
# Contrasts
cont01 = ['average', 'T', condition_names, [1/3., 1/3., 1/3.]]
cont02 = ['Finger', 'T', condition_names, [1, 0, 0]]
cont03 = ['Foot', 'T', condition_names, [0, 1, 0]]
cont04 = ['Lips', 'T', condition_names, [0, 0, 1]]
cont05 = ['Finger < others','T', condition_names, [-1, 0.5, 0.5]]
cont06 = ['Foot < others', 'T', condition_names, [0.5, -1, 0.5]]
cont07 = ['Lips > others', 'T', condition_names, [-0.5, -0.5, 1]]
cont08 = ['activation', 'F', [cont02, cont03, cont04]]
cont09 = ['differences', 'F', [cont05, cont06, cont07]]
contrast_list = [cont01, cont02, cont03, cont04, cont05, cont06, cont07, cont08, cont09]
###Output
_____no_output_____
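###Markdown
 As a small optional sketch (not part of the original tutorial): before handing `contrast_list` to SPM, you can check that every T-contrast provides exactly one weight per condition, which catches typos in the weight vectors early.
###Code
# Sanity check: each T-contrast must supply one weight per condition
for contrast in contrast_list:
    if contrast[1] == 'T':
        name, _, conds, weights = contrast
        assert len(conds) == len(weights) == len(condition_names), name
print('All T-contrast weight vectors are consistent with the condition names.')
###Output
_____no_output_____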
###Markdown
Estimate 1st-level contrastsBefore we can estimate the 1st-level contrasts, we first need to create the 1st-level design. Here you can also specify what kind of basis function you want (HRF, FIR, Fourier, etc.), if you want to use time and dispersion derivatives and how you want to model the serial correlation.In this example, I propose that you use an HRF basis function, that we model time derivatives and that we model the serial correlation with AR(1).
###Code
from nipype.interfaces.spm import Level1Design
# Initiate the Level1Design node here
level1design = Node(Level1Design(bases={'hrf': {'derivs': [1, 0]}},
timing_units='secs',
interscan_interval=2.5,
model_serial_correlations='AR(1)'),
name="level1design")
###Output
_____no_output_____
###Markdown
Now that we have the Model Specification and 1st-Level Design node, we can connect them to each other:
###Code
# Connect the two nodes here
analysis1st.connect([(modelspec, level1design, [('session_info',
'session_info')])])
###Output
_____no_output_____
###Markdown
Now we need to estimate the model. I recommend that you use the `Classical: 1` method for the estimation.
###Code
from nipype.interfaces.spm import EstimateModel
# Initiate the EstimateModel node here
level1estimate = Node(EstimateModel(estimation_method={'Classical': 1}),
name="level1estimate")
###Output
_____no_output_____
###Markdown
Now we can connect the 1st-Level Design node with the model estimation node.
###Code
# Connect the two nodes here
analysis1st.connect([(level1design, level1estimate, [('spm_mat_file',
'spm_mat_file')])])
###Output
_____no_output_____
###Markdown
Now that we have estimated the model, we can estimate the contrasts. Don't forget to feed the list of contrasts we specified above to this node.
###Code
from nipype.interfaces.spm import EstimateContrast
# Initiate the EstimateContrast node here
level1conest = Node(EstimateContrast(contrasts=contrast_list),
name="level1conest")
###Output
_____no_output_____
###Markdown
Now we can connect the model estimation node with the contrast estimation node.
###Code
# Connect the two nodes here
analysis1st.connect([(level1estimate, level1conest, [('spm_mat_file',
'spm_mat_file'),
('beta_images',
'beta_images'),
('residual_image',
'residual_image')])])
###Output
_____no_output_____
###Markdown
Normalize 1st-level contrastsNow that the contrasts were estimated in subject space we can put them into a common reference space by normalizing them to a specific template. In this case, we will be using SPM12's Normalize routine and normalize to the SPM12 tissue probability map `TPM.nii`.At this step, you can also specify the voxel resolution of the output volumes. If you don't specify it, it will normalize to a voxel resolution of 2x2x2mm. As a training exercise, set the voxel resolution to 4x4x4mm.
###Code
from nipype.interfaces.spm import Normalize12
# Location of the template
template = '/opt/spm12-r7219/spm12_mcr/spm12/tpm/TPM.nii'
# Initiate the Normalize12 node here
normalize = Node(Normalize12(jobtype='estwrite',
tpm=template,
write_voxel_sizes=[4, 4, 4]
),
name="normalize")
###Output
_____no_output_____
###Markdown
Now we can connect the estimated contrasts to the normalization node.
###Code
# Connect the nodes here
analysis1st.connect([(level1conest, normalize, [('con_images',
'apply_to_files')])
])
###Output
_____no_output_____
###Markdown
Datainput with `SelectFiles` and `iterables` As in the preprocessing hands-on, we will again be using [`SelectFiles`](../../../nipype_tutorial/notebooks/basic_data_input.ipynbSelectFiles) and [`iterables`](../../../nipype_tutorial/notebooks/basic_iteration.ipynb). So, what do we need?From the preprocessing pipeline, we need the functional images, the motion parameters and the list of outliers. Also, for the normalization, we need the subject-specific anatomy.
###Code
# Import the SelectFiles
from nipype import SelectFiles
# String template with {}-based strings
templates = {'anat': '/data/ds000114/sub-{subj_id}/ses-test/anat/sub-{subj_id}_ses-test_T1w.nii.gz',
'func': '/output/datasink_handson/preproc/sub-{subj_id}_detrend.nii.gz',
'mc_param': '/output/datasink_handson/preproc/sub-{subj_id}.par',
'outliers': '/output/datasink_handson/preproc/art.sub-{subj_id}_outliers.txt'
}
# Create SelectFiles node
sf = Node(SelectFiles(templates, sort_filelist=True),
name='selectfiles')
###Output
_____no_output_____
###Markdown
Now we can specify over which subjects the workflow should iterate. As we preprocessed only a subset of the subjects, we can only use those for this analysis.
###Code
# list of subject identifiers
subject_list = ['02', '03', '04', '07', '08', '09']
sf.iterables = [('subj_id', subject_list)]
###Output
_____no_output_____
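###Markdown
 As a quick, optional sanity check (a sketch, not part of the original workflow), you can resolve the `SelectFiles` templates for a single subject with the interface directly and verify that the expected files are found; the subject id '07' below is only an example.
###Code
# Resolve the templates for one example subject outside of the workflow.
# This only checks that the paths match existing files; nothing is processed.
check_sf = SelectFiles(templates, sort_filelist=True)
check_sf.inputs.subj_id = '07'
print(check_sf.run().outputs)
###Output
_____no_output_____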
###Markdown
Gunzip Node SPM12 can accept NIfTI files as input, but only if they are not compressed ('unzipped'). Therefore, we need to use a `Gunzip` node to unzip the detrend file and another one to unzip the anatomy image, before we can feed them to the model specification and normalization nodes.
###Code
from nipype.algorithms.misc import Gunzip
# Initiate the two Gunzip node here
gunzip_anat = Node(Gunzip(), name='gunzip_anat')
gunzip_func = Node(Gunzip(), name='gunzip_func')
###Output
_____no_output_____
###Markdown
And as a final step, we just need to connect this `SelectFiles` node to the rest of the workflow.
###Code
# Connect SelectFiles node to the other nodes here
analysis1st.connect([(sf, gunzip_anat, [('anat', 'in_file')]),
(sf, gunzip_func, [('func', 'in_file')]),
(gunzip_anat, normalize, [('out_file', 'image_to_align')]),
(gunzip_func, modelspec, [('out_file', 'functional_runs')]),
(sf, modelspec, [('mc_param', 'realignment_parameters'),
('outliers', 'outlier_files'),
])
])
###Output
_____no_output_____
###Markdown
Data output with `DataSink`Now, before we run the workflow, let's again specify a `Datasink` folder to only keep those files that we want to keep.
###Code
from nipype.interfaces.io import DataSink
# Initiate DataSink node here
# Initiate the datasink node
output_folder = 'datasink_handson'
datasink = Node(DataSink(base_directory='/output/',
container=output_folder),
name="datasink")
## Use the following substitutions for the DataSink output
substitutions = [('_subj_id_', 'sub-')]
datasink.inputs.substitutions = substitutions
###Output
_____no_output_____
###Markdown
Now the next step is to specify all the output that we want to keep in our output folder `output`. Probably best to keep are the:- SPM.mat file and the spmT and spmF files from the contrast estimation node- normalized betas and anatomy
###Code
# Connect nodes to datasink here
analysis1st.connect([(level1conest, datasink, [('spm_mat_file', '1stLevel.@spm_mat'),
('spmT_images', '1stLevel.@T'),
('spmF_images', '1stLevel.@F'),
]),
(normalize, datasink, [('normalized_files', 'normalized.@files'),
('normalized_image', 'normalized.@image'),
]),
])
###Output
_____no_output_____
###Markdown
Visualize the workflowNow that the workflow is finished, let's visualize it again.
###Code
# Create 1st-level analysis output graph
analysis1st.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename='/output/work_1st/graph.png')
###Output
_____no_output_____
###Markdown
Run the WorkflowNow that everything is ready, we can run the 1st-level analysis workflow. Change ``n_procs`` to the number of jobs/cores you want to use.
###Code
analysis1st.run('MultiProc', plugin_args={'n_procs': 4})
###Output
_____no_output_____
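###Markdown
 Before moving on, an optional sketch (not from the original notebook): you can list what the `DataSink` actually wrote, for example with Python's `glob`, to confirm that the SPM.mat files and the normalized contrasts are where the following cells expect them.
###Code
from glob import glob
# These paths follow the DataSink layout used above (container 'datasink_handson')
print(sorted(glob('/output/datasink_handson/1stLevel/sub-*/SPM.mat')))
print(sorted(glob('/output/datasink_handson/normalized/sub-*/w*.nii'))[:5])
###Output
_____no_output_____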
###Markdown
Visualize results
###Code
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
First, let's look at the 1st-level Design Matrix of subject one, to verify that everything is as it should be.
###Code
from scipy.io import loadmat
# Using scipy's loadmat function we can access SPM.mat
spmmat = loadmat('/output/datasink_handson/1stLevel/sub-07/SPM.mat',
struct_as_record=False)
###Output
_____no_output_____
###Markdown
The design matrix and the names of the regressors are a bit hidden in the `spmmat` variable, but they can be accessed as follows:
###Code
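# SPM.mat is a nested MATLAB struct; with struct_as_record=False each level is
# wrapped in object arrays, hence the repeated [0][0] indexing below
# (SPM.xX.X is the design matrix, SPM.xX.name holds the regressor labels)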
designMatrix = spmmat['SPM'][0][0].xX[0][0].X
names = [i[0] for i in spmmat['SPM'][0][0].xX[0][0].name[0]]
###Output
_____no_output_____
###Markdown
Now, before we can plot it, we just need to normalize the design matrix in such a way that each column has a maximum amplitude of 1. This is just for visualization purposes; otherwise, the rotation parameters with their rather small values would not show up in the figure.
###Code
normed_design = designMatrix / np.abs(designMatrix).max(axis=0)
###Output
_____no_output_____
###Markdown
And we're ready to plot the design matrix.
###Code
fig, ax = plt.subplots(figsize=(8, 8))
plt.imshow(normed_design, aspect='auto', cmap='gray', interpolation='none')
ax.set_ylabel('Volume id')
ax.set_xticks(np.arange(len(names)))
ax.set_xticklabels(names, rotation=90);
###Output
_____no_output_____
###Markdown
Now that we're happy with the design matrix, let's look how well the normalization worked.
###Code
import nibabel as nb
from nilearn.plotting import plot_anat
from nilearn.plotting import plot_glass_brain
# Load GM probability map of TPM.nii
img = nb.load('/opt/spm12-r7219/spm12_mcr/spm12/tpm/TPM.nii')
GM_template = nb.Nifti1Image(img.get_data()[..., 0], img.affine, img.header)
# Plot normalized subject anatomy
display = plot_anat('/output/datasink_handson/normalized/sub-07/wsub-07_ses-test_T1w.nii',
dim=-0.1)
# Overlay in edges GM map
display.add_edges(GM_template)
###Output
_____no_output_____
###Markdown
Let's look at the contrasts of one subject that we've just computed, in particular the two F-contrasts.
###Code
plot_glass_brain('/output/datasink_handson/normalized/sub-07/wess_0008.nii',
colorbar=True, display_mode='lyrz', black_bg=True, threshold=25,
title='subject 7 - F-contrast: Activation');
plot_glass_brain('/output/datasink_handson/normalized/sub-07/wess_0009.nii',
colorbar=True, display_mode='lyrz', black_bg=True, threshold=25,
title='subject 7 - F-contrast: Differences');
###Output
_____no_output_____
###Markdown
2nd-level Analysis Workflow StructureLast but not least, the group level analysis. This example will also directly include thresholding of the output, as well as some visualization. ImportsTo make sure that the necessary imports are done, here they are again:
###Code
# Get the Node and Workflow object
from nipype import Node, Workflow
# Specify which SPM to use
from nipype.interfaces.matlab import MatlabCommand
MatlabCommand.set_default_paths('/opt/spm12-r7219/spm12_mcr/spm12')
###Output
_____no_output_____
###Markdown
Create Nodes and Workflow connectionsNow we should know this part very well. Workflow for the 2nd-level analysis
###Code
# Create the workflow here
# Hint: use 'base_dir' to specify where to store the working directory
analysis2nd = Workflow(name='work_2nd', base_dir='/output/')
###Output
_____no_output_____
###Markdown
2nd-Level DesignThis step depends on your study design and the tests you want to perform. If you're using SPM to do the group analysis, you have the liberty to choose between a factorial design, a multiple regression design, one-sample T-Test design, a paired T-Test design or a two-sample T-Test design.For the current example, we will be using a one sample T-Test design.
###Code
from nipype.interfaces.spm import OneSampleTTestDesign
# Initiate the OneSampleTTestDesign node here
onesamplettestdes = Node(OneSampleTTestDesign(), name="onesampttestdes")
###Output
_____no_output_____
###Markdown
The next two steps are the same as for the 1st-level design, i.e. estimation of the model followed by estimation of the contrasts.
###Code
from nipype.interfaces.spm import EstimateModel, EstimateContrast
# Initiate the EstimateModel and the EstimateContrast node here
level2estimate = Node(EstimateModel(estimation_method={'Classical': 1}),
name="level2estimate")
level2conestimate = Node(EstimateContrast(group_contrast=True),
name="level2conestimate")
###Output
_____no_output_____
###Markdown
To finish the `EstimateContrast` node, we also need to specify which contrast should be computed. For a 2nd-level one sample t-test design, this is rather straightforward:
###Code
cont01 = ['Group', 'T', ['mean'], [1]]
level2conestimate.inputs.contrasts = [cont01]
###Output
_____no_output_____
###Markdown
Now, let's connect those three design nodes to each other.
###Code
# Connect OneSampleTTestDesign, EstimateModel and EstimateContrast here
analysis2nd.connect([(onesamplettestdes, level2estimate, [('spm_mat_file',
'spm_mat_file')]),
(level2estimate, level2conestimate, [('spm_mat_file',
'spm_mat_file'),
('beta_images',
'beta_images'),
('residual_image',
'residual_image')])
])
###Output
_____no_output_____
###Markdown
Thresholding of output contrastAnd to close, we will use SPM `Threshold`. With this routine, we can set a specific voxel threshold (e.g. *p*<0.001) and apply an FDR cluster threshold (e.g. *p*<0.05).As we only have six subjects, I recommend setting the voxel threshold to 0.01 and leaving the cluster threshold at 0.05.
###Code
from nipype.interfaces.spm import Threshold
level2thresh = Node(Threshold(contrast_index=1,
use_topo_fdr=True,
use_fwe_correction=False,
extent_threshold=0,
height_threshold=0.01,
height_threshold_type='p-value',
extent_fdr_p_threshold=0.05),
name="level2thresh")
# Connect the Threshold node to the EstimateContrast node here
analysis2nd.connect([(level2conestimate, level2thresh, [('spm_mat_file',
'spm_mat_file'),
('spmT_images',
'stat_image'),
])
])
###Output
_____no_output_____
###Markdown
Gray Matter MaskWe could run our 2nd-level workflow as it is. All the major nodes are there. But I nonetheless suggest that we use a gray matter mask to restrict the analysis to only gray matter voxels.In the 1st-level analysis, we normalized to SPM12's `TPM.nii` tissue probability atlas. Therefore, we could just take the gray matter probability map of this `TPM.nii` image (the first volume) and threshold it at a certain probability value to get a binary mask. This can of course also all be done in Nipype, but sometimes the direct bash code is quicker:
###Code
%%bash
TEMPLATE='/opt/spm12-r7219/spm12_mcr/spm12/tpm/TPM.nii'
# Extract the first volume with `fslroi`
fslroi $TEMPLATE GM_PM.nii.gz 0 1
# Threshold the probability mask at 10%
fslmaths GM_PM.nii -thr 0.10 -bin /output/datasink_handson/GM_mask.nii.gz
# Unzip the mask and delete the GM_PM.nii file
gunzip /output/datasink_handson/GM_mask.nii.gz
rm GM_PM.nii.gz
###Output
_____no_output_____
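###Markdown
 If you prefer to stay in Python, here is a roughly equivalent, hedged sketch (not part of the original notebook) that builds the same binary gray matter mask with nibabel and numpy instead of FSL:
###Code
import nibabel as nb
import numpy as np
# Load the tissue probability maps and keep the first volume (gray matter)
tpm = nb.load('/opt/spm12-r7219/spm12_mcr/spm12/tpm/TPM.nii')
gm_prob = tpm.get_data()[..., 0]
# Threshold the probability map at 10% and save the binary mask unzipped
gm_mask = nb.Nifti1Image((gm_prob >= 0.10).astype(np.uint8), tpm.affine)
nb.save(gm_mask, '/output/datasink_handson/GM_mask.nii')
###Output
_____no_output_____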
###Markdown
Let's take a look at this mask:
###Code
from nilearn.plotting import plot_anat
%matplotlib inline
plot_anat('/output/datasink_handson/GM_mask.nii', dim=-1)
###Output
_____no_output_____
###Markdown
Now we just need to specify this binary mask as an `explicit_mask_file` for the one sample T-test node.
###Code
onesamplettestdes.inputs.explicit_mask_file = '/output/datasink_handson/GM_mask.nii'
###Output
_____no_output_____
###Markdown
Datainput with `SelectFiles` and `iterables` We will again be using [`SelectFiles`](../../../nipype_tutorial/notebooks/basic_data_input.ipynbSelectFiles) and [`iterables`](../../../nipype_tutorial/notebooks/basic_iteration.ipynb).So, what do we need? Actually, just the 1st-level contrasts of all subjects, separated by contrast number.
###Code
# Import the SelectFiles
from nipype import SelectFiles
# String template with {}-based strings
templates = {'cons': '/output/datasink_handson/normalized/sub-*/w*_{cont_id}.nii'}
# Create SelectFiles node
sf = Node(SelectFiles(templates, sort_filelist=True),
name='selectfiles')
###Output
_____no_output_____
###Markdown
We are using `*` to tell `SelectFiles` that it can grab all available subjects and any contrast with a specific contrast id, independent of whether it's a t-contrast (`con`) or an F-contrast (`ess`).So, let's specify over which contrasts the workflow should iterate.
###Code
# list of contrast identifiers
contrast_id_list = ['0001', '0002', '0003', '0004', '0005',
'0006', '0007', '0008', '0009']
sf.iterables = [('cont_id', contrast_id_list)]
###Output
_____no_output_____
###Markdown
Now we need to connect the `SelectFiles` to the `OneSampleTTestDesign` node.
###Code
analysis2nd.connect([(sf, onesamplettestdes, [('cons', 'in_files')])])
###Output
_____no_output_____
###Markdown
Data output with `DataSink`Now, before we run the workflow, let's again specify a `Datasink` folder to only keep those files that we want to keep.
###Code
from nipype.interfaces.io import DataSink
# Initiate DataSink node here
# Initiate the datasink node
output_folder = 'datasink_handson'
datasink = Node(DataSink(base_directory='/output/',
container=output_folder),
name="datasink")
## Use the following substitutions for the DataSink output
substitutions = [('_cont_id_', 'con_')]
datasink.inputs.substitutions = substitutions
###Output
_____no_output_____
###Markdown
Now the next step is to specify all the output that we want to keep in our output folder `output`. Probably best to keep are the:- the SPM.mat file and the spmT images from the `EstimateContrast` node- the thresholded spmT images from the `Threshold` node
###Code
# Connect nodes to datasink here
analysis2nd.connect([(level2conestimate, datasink, [('spm_mat_file',
'2ndLevel.@spm_mat'),
('spmT_images',
'2ndLevel.@T'),
('con_images',
'2ndLevel.@con')]),
(level2thresh, datasink, [('thresholded_map',
'2ndLevel.@threshold')])
])
###Output
_____no_output_____
###Markdown
Visualize the workflowAnd we're good to go. Let's first take a look at the workflow.
###Code
# Create 1st-level analysis output graph
analysis2nd.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename='/output/work_2nd/graph.png')
###Output
_____no_output_____
###Markdown
Run the WorkflowNow that everything is ready, we can run the 2nd-level analysis workflow. Change ``n_procs`` to the number of jobs/cores you want to use.
###Code
analysis2nd.run('MultiProc', plugin_args={'n_procs': 4})
###Output
_____no_output_____
###Markdown
Visualize resultsLet's take a look at the results. Keep in mind that we only have *`N=6`* subjects and that we set the voxel threshold to a very liberal `p<0.01`. Interpretation of the results should, therefore, be taken with a lot of caution.
###Code
from nilearn.plotting import plot_glass_brain
%matplotlib inline
out_path = '/output/datasink_handson/2ndLevel/'
plot_glass_brain(out_path + 'con_0001/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='average (FDR corrected)');
plot_glass_brain(out_path + 'con_0002/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Finger (FDR corrected)');
plot_glass_brain(out_path + 'con_0003/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Foot (FDR corrected)');
plot_glass_brain(out_path + 'con_0004/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Lips (FDR corrected)');
plot_glass_brain(out_path + 'con_0005/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Finger < others (FDR corrected)');
plot_glass_brain(out_path + 'con_0006/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Foot < others (FDR corrected)');
plot_glass_brain(out_path + 'con_0007/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Lips > others (FDR corrected)');
###Output
_____no_output_____
###Markdown
Hands-on 2: How to create an fMRI analysis workflowThe purpose of this section is that you set up a complete fMRI analysis workflow yourself, so that in the end you are able to perform the analysis from A-Z, i.e. from preprocessing to group analysis. This section will cover the analysis part; the previous section [Hands-on 1: Preprocessing](handson_preprocessing.ipynb) handles the preprocessing part.We will use this opportunity to show you some nice additional interfaces/nodes that might not be relevant to your usual analysis. But it's always nice to know that they exist. And hopefully, this will encourage you to investigate all other interfaces that Nipype can bring to the tip of your finger.Important: You will not be able to go through this notebook if you haven't preprocessed your subjects first. 1st-level Analysis Workflow StructureIn this notebook, we will create a workflow that performs 1st-level analysis and normalizes the resulting beta weights to the MNI template. In concrete steps this means: 1. Specify 1st-level model parameters 2. Specify 1st-level contrasts 3. Estimate 1st-level contrasts 4. Normalize 1st-level contrasts ImportsIt's always best to have all relevant module imports at the beginning of your script. So let's import what we most certainly need.
###Code
# Get the Node and Workflow object
from nipype import Node, Workflow
# Specify which SPM to use
from nipype.interfaces.matlab import MatlabCommand
MatlabCommand.set_default_paths('/opt/spm12-dev/spm12_mcr/spm/spm12')
###Output
_____no_output_____
###Markdown
**Note:** Ideally you would also put the imports of all the interfaces that you use here at the top. But as we will develop the workflow step by step, we can also import the relevant modules as we go. Create Nodes and Workflow connectionsLet's create all the nodes that we need! Make sure to specify all relevant inputs and keep in mind which ones you will later need to connect in your pipeline. Workflow for the 1st-level analysisWe recommend creating the workflow and establishing all its connections at a later place in your script. This helps to keep everything nicely together. But for this hands-on example, it makes sense to establish the connections between the nodes as we go.And for this, we first need to create a workflow:
###Code
# Create the workflow here
# Hint: use 'base_dir' to specify where to store the working directory
analysis1st = Workflow(name='work_1st', base_dir='/output/')
###Output
_____no_output_____
###Markdown
Specify 1st-level model parameters (stimuli onsets, duration, etc.) To specify the 1st-level model, we need the subject-specific onset times and durations of the stimuli. Luckily, as we are working with a BIDS dataset, this information is nicely stored in a `tsv` file:
###Code
import pandas as pd
trialinfo = pd.read_table('/data/ds000114/task-fingerfootlips_events.tsv')
trialinfo
###Output
_____no_output_____
###Markdown
Using pandas is probably the quickest and easiest way to aggregate stimulus information per condition.
###Code
for group in trialinfo.groupby('trial_type'):
print(group)
print("")
###Output
_____no_output_____
###Markdown
To create a GLM model, Nipype needs a list of `Bunch` objects per session. As we only have one session, our object needs to look as follows: [Bunch(conditions=['Finger', 'Foot', 'Lips'], durations=[[15.0, 15.0, 15.0, 15.0, 15.0], [15.0, 15.0, 15.0, 15.0, 15.0], [15.0, 15.0, 15.0, 15.0, 15.0]], onsets=[[10, 100, 190, 280, 370], [40, 130, 220, 310, 400], [70, 160, 250, 340, 430]] )]For more information see either the [official documentation](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.algorithms.modelgen.html) or the [nipype_tutorial example](https://miykael.github.io/nipype_tutorial/notebooks/example_1stlevel.htmlSpecify-GLM-Model).So, let's create this Bunch object that we can then use for the GLM model.
###Code
import pandas as pd
from nipype.interfaces.base import Bunch
trialinfo = pd.read_table('/data/ds000114/task-fingerfootlips_events.tsv')
conditions = []
onsets = []
durations = []
for group in trialinfo.groupby('trial_type'):
conditions.append(group[0])
onsets.append(list(group[1].onset -10)) # subtracting 10s due to removing of 4 dummy scans
durations.append(group[1].duration.tolist())
subject_info = [Bunch(conditions=conditions,
onsets=onsets,
durations=durations,
)]
subject_info
###Output
_____no_output_____
###Markdown
Good! Now we can create the node that will create the SPM model. For this we will be using `SpecifySPMModel`. As a reminder the TR of the acquisition is 2.5s and we want to use a high pass filter of 128.
###Code
from nipype.algorithms.modelgen import SpecifySPMModel
# Initiate the SpecifySPMModel node here
modelspec = Node(SpecifySPMModel(concatenate_runs=False,
input_units='secs',
output_units='secs',
time_repetition=2.5,
high_pass_filter_cutoff=128,
subject_info=subject_info),
name="modelspec")
###Output
_____no_output_____
###Markdown
This node will also need some additional inputs, such as the preprocessed functional images, the motion parameters etc. We will specify those once we take care of the workflow data input stream. Specify 1st-level contrastsTo do any GLM analysis, we need to also define the contrasts that we want to investigate. If we recap, we had three different conditions in the **fingerfootlips** task in this dataset:- **finger**- **foot**- **lips**Therefore, we could create the following contrasts (seven T-contrasts and two F-contrasts):
###Code
# Condition names
condition_names = ['Finger', 'Foot', 'Lips']
# Contrasts
cont01 = ['average', 'T', condition_names, [1/3., 1/3., 1/3.]]
cont02 = ['Finger', 'T', condition_names, [1, 0, 0]]
cont03 = ['Foot', 'T', condition_names, [0, 1, 0]]
cont04 = ['Lips', 'T', condition_names, [0, 0, 1]]
cont05 = ['Finger < others','T', condition_names, [-1, 0.5, 0.5]]
cont06 = ['Foot < others', 'T', condition_names, [0.5, -1, 0.5]]
cont07 = ['Lips > others', 'T', condition_names, [-0.5, -0.5, 1]]
cont08 = ['activation', 'F', [cont02, cont03, cont04]]
cont09 = ['differences', 'F', [cont05, cont06, cont07]]
contrast_list = [cont01, cont02, cont03, cont04, cont05, cont06, cont07, cont08, cont09]
###Output
_____no_output_____
###Markdown
Estimate 1st-level contrastsBefore we can estimate the 1st-level contrasts, we first need to create the 1st-level design. Here you can also specify what kind of basis function you want (HRF, FIR, Fourier, etc.), if you want to use time and dispersion derivatives and how you want to model the serial correlation.In this example I propose that you use an HRF basis function, that we model time derivatives and that we model the serial correlation with AR(1).
###Code
from nipype.interfaces.spm import Level1Design
# Initiate the Level1Design node here
level1design = Node(Level1Design(bases={'hrf': {'derivs': [1, 0]}},
timing_units='secs',
interscan_interval=2.5,
model_serial_correlations='AR(1)'),
name="level1design")
###Output
_____no_output_____
###Markdown
Now that we have the Model Specification and 1st-Level Design node, we can connect them to each other:
###Code
# Connect the two nodes here
analysis1st.connect([(modelspec, level1design, [('session_info',
'session_info')])])
###Output
_____no_output_____
###Markdown
Now we need to estimate the model. I recommend that you'll use a `Classical: 1` method to estimate the model.
###Code
from nipype.interfaces.spm import EstimateModel
# Initiate the EstimateModel node here
level1estimate = Node(EstimateModel(estimation_method={'Classical': 1}),
name="level1estimate")
###Output
_____no_output_____
###Markdown
Now we can connect the 1st-Level Design node with the model estimation node.
###Code
# Connect the two nodes here
analysis1st.connect([(level1design, level1estimate, [('spm_mat_file',
'spm_mat_file')])])
###Output
_____no_output_____
###Markdown
Now that we have estimated the model, we can estimate the contrasts. Don't forget to feed the list of contrasts we specified above to this node.
###Code
from nipype.interfaces.spm import EstimateContrast
# Initiate the EstimateContrast node here
level1conest = Node(EstimateContrast(contrasts=contrast_list),
name="level1conest")
###Output
_____no_output_____
###Markdown
Now we can connect the model estimation node with the contrast estimation node.
###Code
# Connect the two nodes here
analysis1st.connect([(level1estimate, level1conest, [('spm_mat_file',
'spm_mat_file'),
('beta_images',
'beta_images'),
('residual_image',
'residual_image')])])
###Output
_____no_output_____
###Markdown
Normalize 1st-level contrastsNow that the contrasts were estimated in subject space we can put them into a common reference space by normalizing them to a specific template. In this case we will be using SPM12's Normalize routine and normalize to the SPM12 tissue probability map `TPM.nii`.At this step you can also specify the voxel resolution of the output volumes. If you don't specify it, it will normalize to a voxel resolution of 2x2x2mm. As a training exercise, set the voxel resolution to 4x4x4mm.
###Code
from nipype.interfaces.spm import Normalize12
# Location of the template
template = '/opt/spm12-dev/spm12_mcr/spm/spm12/tpm/TPM.nii'
# Initiate the Normalize12 node here
normalize = Node(Normalize12(jobtype='estwrite',
tpm=template,
write_voxel_sizes=[4, 4, 4]
),
name="normalize")
###Output
_____no_output_____
###Markdown
Now we can connect the estimated contrasts to the normalization node.
###Code
# Connect the nodes here
analysis1st.connect([(level1conest, normalize, [('con_images',
'apply_to_files')])
])
###Output
_____no_output_____
###Markdown
Datainput with `SelectFiles` and `iterables` As in the preprocessing hands-on, we will again be using [`SelectFiles`](../../../nipype_tutorial/notebooks/basic_data_input.ipynbSelectFiles) and [`iterables`](../../../nipype_tutorial/notebooks/basic_iteration.ipynb). So, what do we need?From the preprocessing pipeline, we need the functional images, the motion parameters and the list of outliers. Also, for the normalization we need the subject specific anatomy.
###Code
# Import the SelectFiles
from nipype import SelectFiles
# String template with {}-based strings
templates = {'anat': '/data/ds000114/sub-{subj_id}/ses-test/anat/sub-{subj_id}_ses-test_T1w.nii.gz',
'func': '/output/datasink_handson/preproc/sub-{subj_id}_detrend.nii.gz',
'mc_param': '/output/datasink_handson/preproc/sub-{subj_id}.par',
'outliers': '/output/datasink_handson/preproc/art.sub-{subj_id}_outliers.txt'
}
# Create SelectFiles node
sf = Node(SelectFiles(templates, sort_filelist=True),
name='selectfiles')
###Output
_____no_output_____
###Markdown
Now we can specify over which subjects the workflow should iterate. As we preprocessed only a subset of the subjects, we can only use those for this analysis.
###Code
# list of subject identifiers
subject_list = ['02', '03', '04', '07', '08', '09']
sf.iterables = [('subj_id', subject_list)]
###Output
_____no_output_____
###Markdown
Gunzip Node SPM12 can accept NIfTI files as input, but only if they are not compressed ('unzipped'). Therefore, we need to use a `Gunzip` node to unzip the detrend file and another one to unzip the anatomy image, before we can feed them to the model specification and normalization nodes.
###Code
from nipype.algorithms.misc import Gunzip
# Initiate the two Gunzip node here
gunzip_anat = Node(Gunzip(), name='gunzip_anat')
gunzip_func = Node(Gunzip(), name='gunzip_func')
###Output
_____no_output_____
###Markdown
And as a final step, we just need to connect this `SelectFiles` node to the rest of the workflow.
###Code
# Connect SelectFiles node to the other nodes here
analysis1st.connect([(sf, gunzip_anat, [('anat', 'in_file')]),
(sf, gunzip_func, [('func', 'in_file')]),
(gunzip_anat, normalize, [('out_file', 'image_to_align')]),
(gunzip_func, modelspec, [('out_file', 'functional_runs')]),
(sf, modelspec, [('mc_param', 'realignment_parameters'),
('outliers', 'outlier_files'),
])
])
###Output
_____no_output_____
###Markdown
Data output with `DataSink`Now, before we run the workflow, let's again specify a `Datasink` folder to only keep those files that we want to keep.
###Code
from nipype.interfaces.io import DataSink
# Initiate DataSink node here
# Initiate the datasink node
output_folder = 'datasink_handson'
datasink = Node(DataSink(base_directory='/output/',
container=output_folder),
name="datasink")
## Use the following substitutions for the DataSink output
substitutions = [('_subj_id_', 'sub-')]
datasink.inputs.substitutions = substitutions
###Output
_____no_output_____
###Markdown
Now the next step is to specify all the output that we want to keep in our output folder `output`. Probably best to keep are the:- SPM.mat file and the spmT and spmF files from the contrast estimation node- normalized betas and anatomy
###Code
# Connect nodes to datasink here
analysis1st.connect([(level1conest, datasink, [('spm_mat_file', '1stLevel.@spm_mat'),
('spmT_images', '1stLevel.@T'),
('spmF_images', '1stLevel.@F'),
]),
(normalize, datasink, [('normalized_files', 'normalized.@files'),
('normalized_image', 'normalized.@image'),
]),
])
###Output
_____no_output_____
###Markdown
Visualize the workflowNow that the workflow is finished, let's visualize it again.
###Code
# Create 1st-level analysis output graph
analysis1st.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename='/output/work_1st/graph.png')
###Output
_____no_output_____
###Markdown
Run the WorkflowNow that everything is ready, we can run the 1st-level analysis workflow. Change ``n_procs`` to the number of jobs/cores you want to use.
###Code
analysis1st.run('MultiProc', plugin_args={'n_procs': 8})
###Output
_____no_output_____
###Markdown
Visualize results
###Code
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
First, let's look at the 1st-level Design Matrix of subject one, to verify that everything is as it should be.
###Code
from scipy.io import loadmat
# Using scipy's loadmat function we can access SPM.mat
spmmat = loadmat('/output/datasink_handson/1stLevel/sub-07/SPM.mat',
struct_as_record=False)
###Output
_____no_output_____
###Markdown
The design matrix and the names of the regressors are a bit hidden in the `spmmat` variable, but they can be accessed as follows:
###Code
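# SPM.mat is a nested MATLAB struct; with struct_as_record=False each level is
# wrapped in object arrays, hence the repeated [0][0] indexing below
# (SPM.xX.X is the design matrix, SPM.xX.name holds the regressor labels)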
designMatrix = spmmat['SPM'][0][0].xX[0][0].X
names = [i[0] for i in spmmat['SPM'][0][0].xX[0][0].name[0]]
###Output
_____no_output_____
###Markdown
Now, before we can plot it, we just need to normalize the design matrix in such a way that each column has a maximum amplitude of 1. This is just for visualization purposes; otherwise, the rotation parameters with their rather small values would not show up in the figure.
###Code
normed_design = designMatrix / np.abs(designMatrix).max(axis=0)
###Output
_____no_output_____
###Markdown
And we're ready to plot the design matrix.
###Code
fig, ax = plt.subplots(figsize=(8, 8))
plt.imshow(normed_design, aspect='auto', cmap='gray', interpolation='none')
ax.set_ylabel('Volume id')
ax.set_xticks(np.arange(len(names)))
ax.set_xticklabels(names, rotation=90);
###Output
_____no_output_____
###Markdown
Now that we're happy with the design matrix, let's look how well the normalization worked.
###Code
import nibabel as nb
from nilearn.plotting import plot_anat
from nilearn.plotting import plot_glass_brain
# Load GM probability map of TPM.nii
img = nb.load('/opt/spm12-dev/spm12_mcr/spm/spm12/tpm/TPM.nii')
GM_template = nb.Nifti1Image(img.get_data()[..., 0], img.affine, img.header)
# Plot normalized subject anatomy
display = plot_anat('/output/datasink_handson/normalized/sub-07/wsub-07_ses-test_T1w.nii',
dim=-0.1)
# Overlay in edges GM map
display.add_edges(GM_template)
###Output
_____no_output_____
###Markdown
Let's look at the contrasts of one subject that we've just computed, in particular the two F-contrasts.
###Code
plot_glass_brain('/output/datasink_handson/normalized/sub-07/wess_0008.nii',
colorbar=True, display_mode='lyrz', black_bg=True, threshold=25,
title='subject 7 - F-contrast: Activation');
plot_glass_brain('/output/datasink_handson/normalized/sub-07/wess_0009.nii',
colorbar=True, display_mode='lyrz', black_bg=True, threshold=25,
title='subject 7 - F-contrast: Differences');
###Output
_____no_output_____
###Markdown
2nd-level Analysis Workflow StructureLast but not least, the group level analysis. This example will also directly include thresholding of the output, as well as some visualization. ImportsTo make sure that the necessary imports are done, here they are again:
###Code
# Get the Node and Workflow object
from nipype import Node, Workflow
# Specify which SPM to use
from nipype.interfaces.matlab import MatlabCommand
MatlabCommand.set_default_paths('/opt/spm12-dev/spm12_mcr/spm/spm12')
###Output
_____no_output_____
###Markdown
Create Nodes and Workflow connectionsNow we should know this part very well. Workflow for the 2nd-level analysis
###Code
# Create the workflow here
# Hint: use 'base_dir' to specify where to store the working directory
analysis2nd = Workflow(name='work_2nd', base_dir='/output/')
###Output
_____no_output_____
###Markdown
2nd-Level DesignThis step depends on your study design and the tests you want to perform. If you're using SPM to do the group analysis, you have the liberty to choose between a factorial design, a multiple regression design, one sample T-Test design, a paired T-Test design or a two sample T-Test design.For the current example, we will be using a one sample T-Test design.
###Code
from nipype.interfaces.spm import OneSampleTTestDesign
# Initiate the OneSampleTTestDesign node here
onesamplettestdes = Node(OneSampleTTestDesign(), name="onesampttestdes")
###Output
_____no_output_____
###Markdown
The next two steps are the same as for the 1st-level design, i.e. estimation of the model followed by estimation of the contrasts.
###Code
from nipype.interfaces.spm import EstimateModel, EstimateContrast
# Initiate the EstimateModel and the EstimateContrast node here
level2estimate = Node(EstimateModel(estimation_method={'Classical': 1}),
name="level2estimate")
level2conestimate = Node(EstimateContrast(group_contrast=True),
name="level2conestimate")
###Output
_____no_output_____
###Markdown
To finish the `EstimateContrast` node, we also need to specify which contrast should be computed. For a 2nd-level one-sample t-test design, this is rather straightforward:
###Code
cont01 = ['Group', 'T', ['mean'], [1]]
level2conestimate.inputs.contrasts = [cont01]
###Output
_____no_output_____
###Markdown
Now, let's connect those three design nodes to each other.
###Code
# Connect OneSampleTTestDesign, EstimateModel and EstimateContrast here
analysis2nd.connect([(onesamplettestdes, level2estimate, [('spm_mat_file',
'spm_mat_file')]),
(level2estimate, level2conestimate, [('spm_mat_file',
'spm_mat_file'),
('beta_images',
'beta_images'),
('residual_image',
'residual_image')])
])
###Output
_____no_output_____
###Markdown
Thresholding of output contrastAnd to close, we will use SPM `Threshold`. With this routine, we can set a specific voxel threshold (e.g. *p*<0.001) and apply an FDR cluster threshold (e.g. *p*<0.05).As we only have six subjects, I recommend setting the voxel threshold to 0.01 and leaving the cluster threshold at 0.05.
###Code
from nipype.interfaces.spm import Threshold
level2thresh = Node(Threshold(contrast_index=1,
use_topo_fdr=True,
use_fwe_correction=False,
extent_threshold=0,
height_threshold=0.01,
height_threshold_type='p-value',
extent_fdr_p_threshold=0.05),
name="level2thresh")
# Connect the Threshold node to the EstimateContrast node here
analysis2nd.connect([(level2conestimate, level2thresh, [('spm_mat_file',
'spm_mat_file'),
('spmT_images',
'stat_image'),
])
])
###Output
_____no_output_____
###Markdown
Gray Matter MaskWe could run our 2nd-level workflow as it is. All the major nodes are there. But I nonetheless suggest that we use a gray matter mask to restrict the analysis to only gray matter voxels.In the 1st-level analysis, we normalized to SPM12's `TPM.nii` tissue probability atlas. Therefore, we could just take the gray matter probability map of this `TPM.nii` image (the first volume) and threshold it at a certain probability value to get a binary mask. This can of course also all be done in Nipype, but sometimes the direct bash code is quicker:
###Code
%%bash
TEMPLATE='/opt/spm12-dev/spm12_mcr/spm/spm12/tpm/TPM.nii'
# Extract the first volume with `fslroi`
fslroi $TEMPLATE GM_PM.nii.gz 0 1
# Threshold the probability mask at 10%
fslmaths GM_PM.nii -thr 0.10 -bin /output/datasink_handson/GM_mask.nii.gz
# Unzip the mask and delete the GM_PM.nii file
gunzip /output/datasink_handson/GM_mask.nii.gz
rm GM_PM.nii.gz
###Output
_____no_output_____
###Markdown
Let's take a look at this mask:
###Code
import nibabel as nb
mask = nb.load('/output/datasink_handson/GM_mask.nii')
mask.orthoview()
###Output
_____no_output_____
###Markdown
Now we just need to specify this binary mask as an `explicit_mask_file` for the one sample T-test node.
###Code
onesamplettestdes.inputs.explicit_mask_file = '/output/datasink_handson/GM_mask.nii'
###Output
_____no_output_____
###Markdown
Datainput with `SelectFiles` and `iterables` We will again be using [`SelectFiles`](../../../nipype_tutorial/notebooks/basic_data_input.ipynbSelectFiles) and [`iterables`](../../../nipype_tutorial/notebooks/basic_iteration.ipynb).So, what do we need? Actually, just the 1st-level contrasts of all subjects, separated by contrast number.
###Code
# Import the SelectFiles
from nipype import SelectFiles
# String template with {}-based strings
templates = {'cons': '/output/datasink_handson/normalized/sub-*/w*_{cont_id}.nii'}
# Create SelectFiles node
sf = Node(SelectFiles(templates, sort_filelist=True),
name='selectfiles')
###Output
_____no_output_____
###Markdown
We are using `*` to tell `SelectFiles` that it can grab all available subjects and any contrast with a specific contrast id, independent of whether it's a t-contrast (`con`) or an F-contrast (`ess`).So, let's specify over which contrasts the workflow should iterate.
###Code
# list of contrast identifiers
contrast_id_list = ['0001', '0002', '0003', '0004', '0005',
'0006', '0007', '0008', '0009']
sf.iterables = [('cont_id', contrast_id_list)]
###Output
_____no_output_____
###Markdown
Now we need to connect the `SelectFiles` to the `OneSampleTTestDesign` node.
###Code
analysis2nd.connect([(sf, onesamplettestdes, [('cons', 'in_files')])])
###Output
_____no_output_____
###Markdown
Data output with `DataSink`Now, before we run the workflow, let's again specify a `Datasink` folder to only keep those files that we want to keep.
###Code
from nipype.interfaces.io import DataSink
# Initiate DataSink node here
# Initiate the datasink node
output_folder = 'datasink_handson'
datasink = Node(DataSink(base_directory='/output/',
container=output_folder),
name="datasink")
## Use the following substitutions for the DataSink output
substitutions = [('_cont_id_', 'con_')]
datasink.inputs.substitutions = substitutions
###Output
_____no_output_____
###Markdown
Now the next step is to specify all the output that we want to keep in our output folder `output`. Probably best to keep are the:- the SPM.mat file and the spmT images from the `EstimateContrast` node- the thresholded spmT images from the `Threshold` node
###Code
# Connect nodes to datasink here
analysis2nd.connect([(level2conestimate, datasink, [('spm_mat_file',
'2ndLevel.@spm_mat'),
('spmT_images',
'2ndLevel.@T'),
('con_images',
'2ndLevel.@con')]),
(level2thresh, datasink, [('thresholded_map',
'2ndLevel.@threshold')])
])
###Output
_____no_output_____
###Markdown
Visualize the workflowAnd we're good to go. Let's first take a look at the workflow.
###Code
# Create 1st-level analysis output graph
analysis2nd.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename='/output/work_2nd/graph.png')
###Output
_____no_output_____
###Markdown
Run the WorkflowNow that everything is ready, we can run the 2nd-level analysis workflow. Change ``n_procs`` to the number of jobs/cores you want to use.
###Code
analysis2nd.run('MultiProc', plugin_args={'n_procs': 8})
###Output
_____no_output_____
###Markdown
Visualize resultsLet's take a look at the results. Keep in mind that we only have *`N=6`* subjects and that we set the voxel threshold to a very liberal `p<0.01`. Interpretation of the results should therefore be taken with a lot of caution.
###Code
from nilearn.plotting import plot_glass_brain
%matplotlib inline
out_path = '/output/datasink_handson/2ndLevel/'
plot_glass_brain(out_path + 'con_0001/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='average (FDR corrected)');
plot_glass_brain(out_path + 'con_0002/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Finger (FDR corrected)');
plot_glass_brain(out_path + 'con_0003/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Foot (FDR corrected)');
plot_glass_brain(out_path + 'con_0004/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Lips (FDR corrected)');
plot_glass_brain(out_path + 'con_0005/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Finger < others (FDR corrected)');
plot_glass_brain(out_path + 'con_0006/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Foot < others (FDR corrected)');
plot_glass_brain(out_path + 'con_0007/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Lips > others (FDR corrected)');
###Output
_____no_output_____
###Markdown
Hands-on 2: How to create an fMRI analysis workflowThe purpose of this section is that you set up a complete fMRI analysis workflow yourself, so that in the end you are able to perform the analysis from A-Z, i.e. from preprocessing to group analysis. This section will cover the analysis part; the previous section [Hands-on 1: Preprocessing](handson_preprocessing.ipynb) handles the preprocessing part.We will use this opportunity to show you some nice additional interfaces/nodes that might not be relevant to your usual analysis. But it's always nice to know that they exist. And hopefully, this will encourage you to investigate all other interfaces that Nipype can bring to the tip of your finger.Important: You will not be able to go through this notebook if you haven't preprocessed your subjects first. 1st-level Analysis Workflow StructureIn this notebook, we will create a workflow that performs 1st-level analysis and normalizes the resulting beta weights to the MNI template. In concrete steps this means: 1. Specify 1st-level model parameters 2. Specify 1st-level contrasts 3. Estimate 1st-level contrasts 4. Normalize 1st-level contrasts ImportsIt's always best to have all relevant module imports at the beginning of your script. So let's import what we most certainly need.
###Code
from nilearn import plotting
%matplotlib inline
# Get the Node and Workflow object
from nipype import Node, Workflow
# Specify which SPM to use
from nipype.interfaces.matlab import MatlabCommand
MatlabCommand.set_default_paths('/opt/spm12-dev/spm12_mcr/spm/spm12')
###Output
_____no_output_____
###Markdown
**Note:** Ideally you would also put the imports of all the interfaces that you use here at the top. But as we will develop the workflow step by step, we can also import the relevant modules as we go. Create Nodes and Workflow connectionsLet's create all the nodes that we need! Make sure to specify all relevant inputs and keep in mind which ones you will later need to connect in your pipeline. Workflow for the 1st-level analysisWe recommend creating the workflow and establishing all its connections at a later place in your script. This helps to keep everything nicely together. But for this hands-on example, it makes sense to establish the connections between the nodes as we go.And for this, we first need to create a workflow:
###Code
# Create the workflow here
# Hint: use 'base_dir' to specify where to store the working directory
analysis1st = Workflow(name='work_1st', base_dir='/output/')
###Output
_____no_output_____
###Markdown
Specify 1st-level model parameters (stimuli onsets, duration, etc.) To specify the 1st-level model, we need the subject-specific onset times and durations of the stimuli. Luckily, as we are working with a BIDS dataset, this information is nicely stored in a `tsv` file:
###Code
import pandas as pd
trialinfo = pd.read_table('/data/ds000114/task-fingerfootlips_events.tsv')
trialinfo
###Output
_____no_output_____
###Markdown
Using pandas is probably the quickest and easiest way to aggregate stimulus information per condition.
###Code
for group in trialinfo.groupby('trial_type'):
print(group)
print("")
###Output
_____no_output_____
###Markdown
To create a GLM model, Nipype needs a list of `Bunch` objects per session. As we only have one session, our object needs to look as follows: [Bunch(conditions=['Finger', 'Foot', 'Lips'], durations=[[15.0, 15.0, 15.0, 15.0, 15.0], [15.0, 15.0, 15.0, 15.0, 15.0], [15.0, 15.0, 15.0, 15.0, 15.0]], onsets=[[10, 100, 190, 280, 370], [40, 130, 220, 310, 400], [70, 160, 250, 340, 430]] )]For more information see either the [official documentation](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.algorithms.modelgen.html) or the [nipype_tutorial example](https://miykael.github.io/nipype_tutorial/notebooks/example_1stlevel.htmlSpecify-GLM-Model).So, let's create this Bunch object that we can then use for the GLM model.
###Code
import pandas as pd
from nipype.interfaces.base import Bunch
trialinfo = pd.read_table('/data/ds000114/task-fingerfootlips_events.tsv')
conditions = []
onsets = []
durations = []
for group in trialinfo.groupby('trial_type'):
conditions.append(group[0])
onsets.append(list(group[1].onset -10)) # subtracting 10s due to removing of 4 dummy scans
durations.append(group[1].duration.tolist())
subject_info = [Bunch(conditions=conditions,
onsets=onsets,
durations=durations,
)]
subject_info
###Output
_____no_output_____
###Markdown
Good! Now we can create the node that will create the SPM model. For this we will be using `SpecifySPMModel`. As a reminder the TR of the acquisition is 2.5s and we want to use a high pass filter of 128.
###Code
from nipype.algorithms.modelgen import SpecifySPMModel
# Initiate the SpecifySPMModel node here
modelspec = Node(SpecifySPMModel(concatenate_runs=False,
input_units='secs',
output_units='secs',
time_repetition=2.5,
high_pass_filter_cutoff=128,
subject_info=subject_info),
name="modelspec")
###Output
_____no_output_____
###Markdown
This node will also need some additional inputs, such as the preprocessed functional images, the motion parameters etc. We will specify those once we take care of the workflow data input stream. Specify 1st-level contrastsTo do any GLM analysis, we need to also define the contrasts that we want to investigate. If we recap, we had three different conditions in the **fingerfootlips** task in this dataset:- **finger**- **foot**- **lips**Therefore, we could create the following contrasts (seven T-contrasts and two F-contrasts):
###Code
# Condition names
condition_names = ['Finger', 'Foot', 'Lips']
# Contrasts
cont01 = ['average', 'T', condition_names, [1/3., 1/3., 1/3.]]
cont02 = ['Finger', 'T', condition_names, [1, 0, 0]]
cont03 = ['Foot', 'T', condition_names, [0, 1, 0]]
cont04 = ['Lips', 'T', condition_names, [0, 0, 1]]
cont05 = ['Finger < others','T', condition_names, [-1, 0.5, 0.5]]
cont06 = ['Foot < others', 'T', condition_names, [0.5, -1, 0.5]]
cont07 = ['Lips > others', 'T', condition_names, [-0.5, -0.5, 1]]
cont08 = ['activation', 'F', [cont02, cont03, cont04]]
cont09 = ['differences', 'F', [cont05, cont06, cont07]]
contrast_list = [cont01, cont02, cont03, cont04, cont05, cont06, cont07, cont08, cont09]
###Output
_____no_output_____
###Markdown
Estimate 1st-level contrastsBefore we can estimate the 1st-level contrasts, we first need to create the 1st-level design. Here you can also specify what kind of basis function you want (HRF, FIR, Fourier, etc.), if you want to use time and dispersion derivatives and how you want to model the serial correlation.In this example, I propose that you use an HRF basis function, that we model time derivatives and that we model the serial correlation with AR(1).
###Code
from nipype.interfaces.spm import Level1Design
# Initiate the Level1Design node here
level1design = Node(Level1Design(bases={'hrf': {'derivs': [1, 0]}},
timing_units='secs',
interscan_interval=2.5,
model_serial_correlations='AR(1)'),
name="level1design")
###Output
_____no_output_____
###Markdown
Now that we have the Model Specification and 1st-Level Design node, we can connect them to each other:
###Code
# Connect the two nodes here
analysis1st.connect([(modelspec, level1design, [('session_info',
'session_info')])])
###Output
_____no_output_____
###Markdown
Now we need to estimate the model. I recommend that you use the `Classical: 1` method for the estimation.
###Code
from nipype.interfaces.spm import EstimateModel
# Initiate the EstimateModel node here
level1estimate = Node(EstimateModel(estimation_method={'Classical': 1}),
name="level1estimate")
###Output
_____no_output_____
###Markdown
Now we can connect the 1st-Level Design node with the model estimation node.
###Code
# Connect the two nodes here
analysis1st.connect([(level1design, level1estimate, [('spm_mat_file',
'spm_mat_file')])])
###Output
_____no_output_____
###Markdown
Now that we have estimated the model, we can estimate the contrasts. Don't forget to feed the list of contrasts we specified above to this node.
###Code
from nipype.interfaces.spm import EstimateContrast
# Initiate the EstimateContrast node here
level1conest = Node(EstimateContrast(contrasts=contrast_list),
name="level1conest")
###Output
_____no_output_____
###Markdown
Now we can connect the model estimation node with the contrast estimation node.
###Code
# Connect the two nodes here
analysis1st.connect([(level1estimate, level1conest, [('spm_mat_file',
'spm_mat_file'),
('beta_images',
'beta_images'),
('residual_image',
'residual_image')])])
###Output
_____no_output_____
###Markdown
Normalize 1st-level contrastsNow that the contrasts were estimated in subject space we can put them into a common reference space by normalizing them to a specific template. In this case, we will be using SPM12's Normalize routine and normalize to the SPM12 tissue probability map `TPM.nii`.At this step, you can also specify the voxel resolution of the output volumes. If you don't specify it, it will normalize to a voxel resolution of 2x2x2mm. As a training exercise, set the voxel resolution to 4x4x4mm.
###Code
from nipype.interfaces.spm import Normalize12
# Location of the template
template = '/opt/spm12-dev/spm12_mcr/spm/spm12/tpm/TPM.nii'
# Initiate the Normalize12 node here
normalize = Node(Normalize12(jobtype='estwrite',
tpm=template,
write_voxel_sizes=[4, 4, 4]
),
name="normalize")
###Output
_____no_output_____
###Markdown
Now we can connect the estimated contrasts to normalization node.
###Code
# Connect the nodes here
analysis1st.connect([(level1conest, normalize, [('con_images',
'apply_to_files')])
])
###Output
_____no_output_____
###Markdown
Datainput with `SelectFiles` and `iterables` As in the preprocessing hands-on, we will again be using [`SelectFiles`](../../../nipype_tutorial/notebooks/basic_data_input.ipynb#SelectFiles) and [`iterables`](../../../nipype_tutorial/notebooks/basic_iteration.ipynb). So, what do we need? From the preprocessing pipeline, we need the functional images, the motion parameters and the list of outliers. Also, for the normalization, we need the subject-specific anatomy.
###Code
# Import the SelectFiles
from nipype import SelectFiles
# String template with {}-based strings
templates = {'anat': '/data/ds000114/sub-{subj_id}/ses-test/anat/sub-{subj_id}_ses-test_T1w.nii.gz',
'func': '/output/datasink_handson/preproc/sub-{subj_id}_detrend.nii.gz',
'mc_param': '/output/datasink_handson/preproc/sub-{subj_id}.par',
'outliers': '/output/datasink_handson/preproc/art.sub-{subj_id}_outliers.txt'
}
# Create SelectFiles node
sf = Node(SelectFiles(templates, sort_filelist=True),
name='selectfiles')
###Output
_____no_output_____
###Markdown
Now we can specify over which subjects the workflow should iterate. As we only preprocessed a subset of the subjects, we can only use those for this analysis.
###Code
# list of subject identifiers
subject_list = ['02', '03', '04', '07', '08', '09']
sf.iterables = [('subj_id', subject_list)]
###Output
_____no_output_____
###Markdown
Gunzip Node SPM12 can accept NIfTI files as input, but only if they are not compressed ('unzipped'). Therefore, we need to use a `Gunzip` node to unzip the detrended file and another one to unzip the anatomy image, before we can feed them to the model specification and normalization nodes.
###Code
from nipype.algorithms.misc import Gunzip
# Initiate the two Gunzip node here
gunzip_anat = Node(Gunzip(), name='gunzip_anat')
gunzip_func = Node(Gunzip(), name='gunzip_func')
###Output
_____no_output_____
###Markdown
And as a final step, we just need to connect this `SelectFiles` node to the rest of the workflow.
###Code
# Connect SelectFiles node to the other nodes here
analysis1st.connect([(sf, gunzip_anat, [('anat', 'in_file')]),
(sf, gunzip_func, [('func', 'in_file')]),
(gunzip_anat, normalize, [('out_file', 'image_to_align')]),
(gunzip_func, modelspec, [('out_file', 'functional_runs')]),
(sf, modelspec, [('mc_param', 'realignment_parameters'),
('outliers', 'outlier_files'),
])
])
###Output
_____no_output_____
###Markdown
Data output with `DataSink`Now, before we run the workflow, let's again specify a `Datasink` folder to only keep those files that we want to keep.
###Code
from nipype.interfaces.io import DataSink
# Initiate DataSink node here
# Initiate the datasink node
output_folder = 'datasink_handson'
datasink = Node(DataSink(base_directory='/output/',
container=output_folder),
name="datasink")
## Use the following substitutions for the DataSink output
substitutions = [('_subj_id_', 'sub-')]
datasink.inputs.substitutions = substitutions
###Output
_____no_output_____
###Markdown
Now the next step is to specify all the output that we want to keep in our output folder `output`. Probably best to keep are the:- SPM.mat file and the spmT and spmF files from the contrast estimation node- normalized betas and anatomy
###Code
# Connect nodes to datasink here
analysis1st.connect([(level1conest, datasink, [('spm_mat_file', '1stLevel.@spm_mat'),
('spmT_images', '1stLevel.@T'),
('spmF_images', '1stLevel.@F'),
]),
(normalize, datasink, [('normalized_files', 'normalized.@files'),
('normalized_image', 'normalized.@image'),
]),
])
###Output
_____no_output_____
###Markdown
Visualize the workflowNow that the workflow is finished, let's visualize it again.
###Code
# Create 1st-level analysis output graph
analysis1st.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename='/output/work_1st/graph.png')
###Output
_____no_output_____
###Markdown
Run the WorkflowNow that everything is ready, we can run the 1st-level analysis workflow. Change ``n_procs`` to the number of jobs/cores you want to use.
###Code
analysis1st.run('MultiProc', plugin_args={'n_procs': 4})
###Output
_____no_output_____
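###Markdown
Before we look at the results, it can be reassuring to check that the expected files actually arrived in the datasink (an optional check; the exact file names depend on your run):
###Code
from glob import glob
# List a few of the files that the 1st-level branch of the datasink now contains
sorted(glob('/output/datasink_handson/1stLevel/sub-*/*'))[:15]
###Output
_____no_output_____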
###Markdown
Visualize results
###Code
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
First, let's look at the 1st-level design matrix of one subject (here sub-07), to verify that everything is as it should be.
###Code
from scipy.io import loadmat
# Using scipy's loadmat function we can access SPM.mat
spmmat = loadmat('/output/datasink_handson/1stLevel/sub-07/SPM.mat',
struct_as_record=False)
###Output
_____no_output_____
###Markdown
The design matrix and the names of the regressors are a bit hidden in the `spmmat` variable, but they can be accessed as follows:
###Code
designMatrix = spmmat['SPM'][0][0].xX[0][0].X
names = [i[0] for i in spmmat['SPM'][0][0].xX[0][0].name[0]]
###Output
_____no_output_____
###Markdown
Now before we can plot it, we just need to normalize the design matrix in such a way that each column has a maximum amplitude of 1. This is just for visualization purposes; otherwise the rotation parameters, with their rather small values, would not show up in the figure.
###Code
normed_design = designMatrix / np.abs(designMatrix).max(axis=0)
###Output
_____no_output_____
###Markdown
And we're ready to plot the design matrix.
###Code
fig, ax = plt.subplots(figsize=(8, 8))
plt.imshow(normed_design, aspect='auto', cmap='gray', interpolation='none')
ax.set_ylabel('Volume id')
ax.set_xticks(np.arange(len(names)))
ax.set_xticklabels(names, rotation=90);
###Output
_____no_output_____
###Markdown
Now that we're happy with the design matrix, let's look at how well the normalization worked.
###Code
import nibabel as nb
from nilearn.plotting import plot_anat
from nilearn.plotting import plot_glass_brain
# Load GM probability map of TPM.nii
img = nb.load('/opt/spm12-dev/spm12_mcr/spm/spm12/tpm/TPM.nii')
GM_template = nb.Nifti1Image(img.get_data()[..., 0], img.affine, img.header)
# Plot normalized subject anatomy
display = plot_anat('/output/datasink_handson/normalized/sub-07/wsub-07_ses-test_T1w.nii',
dim=-0.1)
# Overlay in edges GM map
display.add_edges(GM_template)
###Output
_____no_output_____
###Markdown
Let's look at the contrasts of one subject that we've just computed. In particular the F-contrast.
###Code
plot_glass_brain('/output/datasink_handson/normalized/sub-07/wess_0008.nii',
colorbar=True, display_mode='lyrz', black_bg=True, threshold=25,
title='subject 7 - F-contrast: Activation');
plot_glass_brain('/output/datasink_handson/normalized/sub-07/wess_0009.nii',
colorbar=True, display_mode='lyrz', black_bg=True, threshold=25,
title='subject 7 - F-contrast: Differences');
###Output
_____no_output_____
###Markdown
2nd-level Analysis Workflow StructureLast but not least, the group level analysis. This example will also directly include thresholding of the output, as well as some visualization. ImportsTo make sure that the necessary imports are done, here they are again:
###Code
# Get the Node and Workflow object
from nipype import Node, Workflow
# Specify which SPM to use
from nipype.interfaces.matlab import MatlabCommand
MatlabCommand.set_default_paths('/opt/spm12-dev/spm12_mcr/spm/spm12')
###Output
_____no_output_____
###Markdown
Create Nodes and Workflow connectionsNow we should know this part very well. Workflow for the 2nd-level analysis
###Code
# Create the workflow here
# Hint: use 'base_dir' to specify where to store the working directory
analysis2nd = Workflow(name='work_2nd', base_dir='/output/')
###Output
_____no_output_____
###Markdown
2nd-Level DesignThis step depends on your study design and the tests you want to perform. If you're using SPM to do the group analysis, you have the liberty to choose between a factorial design, a multiple regression design, one-sample T-Test design, a paired T-Test design or a two-sample T-Test design.For the current example, we will be using a one sample T-Test design.
###Code
from nipype.interfaces.spm import OneSampleTTestDesign
# Initiate the OneSampleTTestDesign node here
onesamplettestdes = Node(OneSampleTTestDesign(), name="onesampttestdes")
###Output
_____no_output_____
###Markdown
The next two steps are the same as for the 1st-level design, i.e. estimation of the model followed by estimation of the contrasts.
###Code
from nipype.interfaces.spm import EstimateModel, EstimateContrast
# Initiate the EstimateModel and the EstimateContrast node here
level2estimate = Node(EstimateModel(estimation_method={'Classical': 1}),
name="level2estimate")
level2conestimate = Node(EstimateContrast(group_contrast=True),
name="level2conestimate")
###Output
_____no_output_____
###Markdown
To finish the `EstimateContrast` node, we also need to specify which contrast should be computed. For a 2nd-level one sample t-test design, this is rather straightforward:
###Code
cont01 = ['Group', 'T', ['mean'], [1]]
level2conestimate.inputs.contrasts = [cont01]
###Output
_____no_output_____
###Markdown
Now, let's connect those three design nodes to each other.
###Code
# Connect OneSampleTTestDesign, EstimateModel and EstimateContrast here
analysis2nd.connect([(onesamplettestdes, level2estimate, [('spm_mat_file',
'spm_mat_file')]),
(level2estimate, level2conestimate, [('spm_mat_file',
'spm_mat_file'),
('beta_images',
'beta_images'),
('residual_image',
'residual_image')])
])
###Output
_____no_output_____
###Markdown
Thresholding of output contrastAnd to close, we will use SPM `Threshold`. With this routine, we can set a specific voxel threshold (e.g. *p*<0.001) and apply an FDR cluster threshold (e.g. *p*<0.05). As we only have 6 subjects, I recommend setting the voxel threshold to 0.01 and leaving the cluster threshold at 0.05.
###Code
from nipype.interfaces.spm import Threshold
level2thresh = Node(Threshold(contrast_index=1,
use_topo_fdr=True,
use_fwe_correction=False,
extent_threshold=0,
height_threshold=0.01,
height_threshold_type='p-value',
extent_fdr_p_threshold=0.05),
name="level2thresh")
# Connect the Threshold node to the EstimateContrast node here
analysis2nd.connect([(level2conestimate, level2thresh, [('spm_mat_file',
'spm_mat_file'),
('spmT_images',
'stat_image'),
])
])
###Output
_____no_output_____
###Markdown
Gray Matter MaskWe could run our 2nd-level workflow as it is. All the major nodes are there. But I nonetheless suggest that we use a gray matter mask to restrict the analysis to only gray matter voxels.In the 1st-level analysis, we normalized to SPM12's `TPM.nii` tissue probability atlas. Therefore, we could just take the gray matter probability map of this `TPM.nii` image (the first volume) and threshold it at a certain probability value to get a binary mask. This can of course also all be done in Nipype, but sometimes the direct bash code is quicker:
###Code
%%bash
TEMPLATE='/opt/spm12-dev/spm12_mcr/spm/spm12/tpm/TPM.nii'
# Extract the first volume with `fslroi`
fslroi $TEMPLATE GM_PM.nii.gz 0 1
# Threshold the probability mask at 10%
fslmaths GM_PM.nii -thr 0.10 -bin /output/datasink_handson/GM_mask.nii.gz
# Unzip the mask and delete the GM_PM.nii file
gunzip /output/datasink_handson/GM_mask.nii.gz
rm GM_PM.nii.gz
###Output
_____no_output_____
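###Markdown
As mentioned above, the same gray matter mask can also be created directly in Python. Here is a minimal sketch using nibabel and numpy (it assumes the same TPM.nii location and writes to the same output file as the bash cell):
###Code
import numpy as np
import nibabel as nb
# Load the tissue probability maps; the first volume is the gray matter probability map
img = nb.load('/opt/spm12-dev/spm12_mcr/spm/spm12/tpm/TPM.nii')
gm_prob = img.get_data()[..., 0]
# Keep voxels with at least 10% gray matter probability and binarize
gm_mask = (gm_prob >= 0.10).astype(np.uint8)
nb.Nifti1Image(gm_mask, img.affine).to_filename('/output/datasink_handson/GM_mask.nii')
###Output
_____no_output_____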
###Markdown
Let's take a look at this mask:
###Code
import nibabel as nb
mask = nb.load('/output/datasink_handson/GM_mask.nii')
mask.orthoview()
###Output
_____no_output_____
###Markdown
Now we just need to specify this binary mask as an `explicit_mask_file` for the one sample T-test node.
###Code
onesamplettestdes.inputs.explicit_mask_file = '/output/datasink_handson/GM_mask.nii'
###Output
_____no_output_____
###Markdown
Datainput with `SelectFiles` and `iterables` We will again be using [`SelectFiles`](../../../nipype_tutorial/notebooks/basic_data_input.ipynb#SelectFiles) and [`iterables`](../../../nipype_tutorial/notebooks/basic_iteration.ipynb). So, what do we need? Actually, just the 1st-level contrasts of all subjects, separated by contrast number.
###Code
# Import the SelectFiles
from nipype import SelectFiles
# String template with {}-based strings
templates = {'cons': '/output/datasink_handson/normalized/sub-*/w*_{cont_id}.nii'}
# Create SelectFiles node
sf = Node(SelectFiles(templates, sort_filelist=True),
name='selectfiles')
###Output
_____no_output_____
###Markdown
We are using `*` to tell `SelectFiles` that it can grab all available subjects and any contrast with a specific contrast id, independent of whether it's a t-contrast (`con`) or an F-contrast (`ess`). So, let's specify over which contrasts the workflow should iterate.
###Code
# list of contrast identifiers
contrast_id_list = ['0001', '0002', '0003', '0004', '0005',
'0006', '0007', '0008', '0009']
sf.iterables = [('cont_id', contrast_id_list)]
###Output
_____no_output_____
###Markdown
Now we need to connect the `SelectFiles` to the `OneSampleTTestDesign` node.
###Code
analysis2nd.connect([(sf, onesamplettestdes, [('cons', 'in_files')])])
###Output
_____no_output_____
###Markdown
Data output with `DataSink`Now, before we run the workflow, let's again specify a `Datasink` folder to only keep those files that we want to keep.
###Code
from nipype.interfaces.io import DataSink
# Initiate DataSink node here
# Initiate the datasink node
output_folder = 'datasink_handson'
datasink = Node(DataSink(base_directory='/output/',
container=output_folder),
name="datasink")
## Use the following substitutions for the DataSink output
substitutions = [('_cont_id_', 'con_')]
datasink.inputs.substitutions = substitutions
###Output
_____no_output_____
###Markdown
Now the next step is to specify all the output that we want to keep in our output folder `output`. Probably best to keep are the:- the SPM.mat file and the spmT images from the `EstimateContrast` node- the thresholded spmT images from the `Threshold` node
###Code
# Connect nodes to datasink here
analysis2nd.connect([(level2conestimate, datasink, [('spm_mat_file',
'2ndLevel.@spm_mat'),
('spmT_images',
'2ndLevel.@T'),
('con_images',
'2ndLevel.@con')]),
(level2thresh, datasink, [('thresholded_map',
'2ndLevel.@threshold')])
])
###Output
_____no_output_____
###Markdown
Visualize the workflowAnd we're good to go. Let's first take a look at the workflow.
###Code
# Create 2nd-level analysis output graph
analysis2nd.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename='/output/work_2nd/graph.png')
###Output
_____no_output_____
###Markdown
Run the WorkflowNow that everything is ready, we can run the 2nd-level analysis workflow. Change ``n_procs`` to the number of jobs/cores you want to use.
###Code
analysis2nd.run('MultiProc', plugin_args={'n_procs': 4})
###Output
_____no_output_____
###Markdown
Visualize resultsLet's take a look at the results. Keep in mind that we only have *`N=6`* subjects and that we set the voxel threshold to a very liberal `p<0.01`. Interpretation of the results should, therefore, be taken with a lot of caution.
###Code
from nilearn.plotting import plot_glass_brain
%matplotlib inline
out_path = '/output/datasink_handson/2ndLevel/'
plot_glass_brain(out_path + 'con_0001/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='average (FDR corrected)');
plot_glass_brain(out_path + 'con_0002/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Finger (FDR corrected)');
plot_glass_brain(out_path + 'con_0003/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Foot (FDR corrected)');
plot_glass_brain(out_path + 'con_0004/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Lips (FDR corrected)');
plot_glass_brain(out_path + 'con_0005/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Finger < others (FDR corrected)');
plot_glass_brain(out_path + 'con_0006/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Foot < others (FDR corrected)');
plot_glass_brain(out_path + 'con_0007/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Lips > others (FDR corrected)');
###Output
_____no_output_____
###Markdown
Hands-on 2: How to create a fMRI analysis workflowThe purpose of this section is for you to set up a complete fMRI analysis workflow yourself, so that in the end you are able to perform the analysis from A to Z, i.e. from preprocessing to group analysis. This section covers the analysis part; the previous section [Hands-on 1: Preprocessing](handson_preprocessing.ipynb) handles the preprocessing part.We will use this opportunity to show you some nice additional interfaces/nodes that might not be relevant to your usual analysis, but it's always nice to know that they exist. And hopefully this will encourage you to investigate all the other interfaces that Nipype puts at your fingertips.Important: You will not be able to go through this notebook if you haven't preprocessed your subjects first. 1st-level Analysis Workflow StructureIn this notebook we will create a workflow that performs 1st-level analysis and normalizes the resulting beta weights to the MNI template. In concrete steps this means: 1. Specify 1st-level model parameters 2. Specify 1st-level contrasts 3. Estimate 1st-level contrasts 4. Normalize 1st-level contrasts ImportsIt's always best to have all relevant module imports at the beginning of your script. So let's import what we most certainly need.
###Code
from nilearn import plotting
%matplotlib inline
# Get the Node and Workflow object
from nipype import Node, Workflow
# Specify which SPM to use
from nipype.interfaces.matlab import MatlabCommand
MatlabCommand.set_default_paths('/opt/spm12-dev/spm12_mcr/spm/spm12')
###Output
_____no_output_____
###Markdown
**Note:** Ideally you would also put the imports of all the interfaces that you use here at the top. But as we will develop the workflow step by step, we can also import the relevant modules as we go. Create Nodes and Workflow connectionsLet's create all the nodes that we need! Make sure to specify all relevant inputs and keep in mind which ones you will later need to connect in your pipeline. Workflow for the 1st-level analysisWe recommend creating the workflow and establishing all its connections in a single place in your script. This helps to keep everything nicely together. But for this hands-on example it makes sense to establish the connections between the nodes as we go.And for this, we first need to create a workflow:
###Code
# Create the workflow here
# Hint: use 'base_dir' to specify where to store the working directory
analysis1st = Workflow(name='work_1st', base_dir='/output/')
###Output
_____no_output_____
###Markdown
Specify 1st-level model parameters (stimuli onsets, duration, etc.) To specify the 1st-level model we need the subject-specific onset times and durations of the stimuli. Luckily, as we are working with a BIDS dataset, this information is nicely stored in a `tsv` file:
###Code
import pandas as pd
trialinfo = pd.read_table('/data/ds000114/task-fingerfootlips_events.tsv')
trialinfo
###Output
_____no_output_____
###Markdown
Using pandas is probably the quickest and easiest way to aggregate stimuli information per condition.
###Code
for group in trialinfo.groupby('trial_type'):
print(group)
print("")
###Output
_____no_output_____
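###Markdown
If you prefer a more compact overview, the same grouping can also be summarized in a single table (an optional check that only uses pandas):
###Code
# Number of trials and mean duration per condition
trialinfo.groupby('trial_type').agg({'onset': 'count', 'duration': 'mean'})
###Output
_____no_output_____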
###Markdown
To create a GLM model, Nipype needs a list of `Bunch` objects per session. As we only have one session, our object needs to look as follows: [Bunch(conditions=['Finger', 'Foot', 'Lips'], durations=[[15.0, 15.0, 15.0, 15.0, 15.0], [15.0, 15.0, 15.0, 15.0, 15.0], [15.0, 15.0, 15.0, 15.0, 15.0]], onsets=[[10, 100, 190, 280, 370], [40, 130, 220, 310, 400], [70, 160, 250, 340, 430]] )] For more information see either the [official documentation](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.algorithms.modelgen.html) or the [nipype_tutorial example](https://miykael.github.io/nipype_tutorial/notebooks/example_1stlevel.html#Specify-GLM-Model). So, let's create this Bunch object that we then can use for the GLM model.
###Code
import pandas as pd
from nipype.interfaces.base import Bunch
trialinfo = pd.read_table('/data/ds000114/task-fingerfootlips_events.tsv')
conditions = []
onsets = []
durations = []
for group in trialinfo.groupby('trial_type'):
conditions.append(group[0])
onsets.append(list(group[1].onset -10)) # subtracting 10s to account for the removal of the 4 dummy scans
durations.append(group[1].duration.tolist())
subject_info = [Bunch(conditions=conditions,
onsets=onsets,
durations=durations,
)]
subject_info
###Output
_____no_output_____
###Markdown
Good! Now we can create the node that will create the SPM model. For this we will be using `SpecifySPMModel`. As a reminder the TR of the acquisition is 2.5s and we want to use a high pass filter of 128.
###Code
from nipype.algorithms.modelgen import SpecifySPMModel
# Initiate the SpecifySPMModel node here
modelspec = Node(SpecifySPMModel(concatenate_runs=False,
input_units='secs',
output_units='secs',
time_repetition=2.5,
high_pass_filter_cutoff=128,
subject_info=subject_info),
name="modelspec")
###Output
_____no_output_____
###Markdown
This node will also need some additional inputs, such as the preprocessed functional images, the motion parameters etc. We will specify those once we take care of the workflow data input stream. Specify 1st-level contrastsTo do any GLM analysis, we need to also define the contrasts that we want to investigate. If we recap, we had three different conditions in the **fingerfootlips** task in this dataset:- **finger**- **foot**- **lips**Therefore, we could create the following contrasts (seven T-contrasts and two F-contrasts):
###Code
# Condition names
condition_names = ['Finger', 'Foot', 'Lips']
# Contrasts
cont01 = ['average', 'T', condition_names, [1/3., 1/3., 1/3.]]
cont02 = ['Finger', 'T', condition_names, [1, 0, 0]]
cont03 = ['Foot', 'T', condition_names, [0, 1, 0]]
cont04 = ['Lips', 'T', condition_names, [0, 0, 1]]
cont05 = ['Finger < others','T', condition_names, [-1, 0.5, 0.5]]
cont06 = ['Foot < others', 'T', condition_names, [0.5, -1, 0.5]]
cont07 = ['Lips > others', 'T', condition_names, [-0.5, -0.5, 1]]
cont08 = ['activation', 'F', [cont02, cont03, cont04]]
cont09 = ['differences', 'F', [cont05, cont06, cont07]]
contrast_list = [cont01, cont02, cont03, cont04, cont05, cont06, cont07, cont08, cont09]
###Output
_____no_output_____
###Markdown
Estimate 1st-level contrastsBefore we can estimate the 1st-level contrasts, we first need to create the 1st-level design. Here you can also specify what kind of basis function you want (HRF, FIR, Fourier, etc.), if you want to use time and dispersion derivatives and how you want to model the serial correlation.In this example I propose that you use an HRF basis function, that we model time derivatives and that we model the serial correlation with AR(1).
###Code
from nipype.interfaces.spm import Level1Design
# Initiate the Level1Design node here
level1design = Node(Level1Design(bases={'hrf': {'derivs': [1, 0]}},
timing_units='secs',
interscan_interval=2.5,
model_serial_correlations='AR(1)'),
name="level1design")
###Output
_____no_output_____
###Markdown
Now that we have the Model Specification and 1st-Level Design node, we can connect them to each other:
###Code
# Connect the two nodes here
analysis1st.connect([(modelspec, level1design, [('session_info',
'session_info')])])
###Output
_____no_output_____
###Markdown
Now we need to estimate the model. I recommend that you use the `Classical: 1` method for the estimation.
###Code
from nipype.interfaces.spm import EstimateModel
# Initiate the EstimateModel node here
level1estimate = Node(EstimateModel(estimation_method={'Classical': 1}),
name="level1estimate")
###Output
_____no_output_____
###Markdown
Now we can connect the 1st-Level Design node with the model estimation node.
###Code
# Connect the two nodes here
analysis1st.connect([(level1design, level1estimate, [('spm_mat_file',
'spm_mat_file')])])
###Output
_____no_output_____
###Markdown
Now that we have estimated the model, we can estimate the contrasts. Don't forget to feed the list of contrasts we specified above to this node.
###Code
from nipype.interfaces.spm import EstimateContrast
# Initiate the EstimateContrast node here
level1conest = Node(EstimateContrast(contrasts=contrast_list),
name="level1conest")
###Output
_____no_output_____
###Markdown
Now we can connect the model estimation node with the contrast estimation node.
###Code
# Connect the two nodes here
analysis1st.connect([(level1estimate, level1conest, [('spm_mat_file',
'spm_mat_file'),
('beta_images',
'beta_images'),
('residual_image',
'residual_image')])])
###Output
_____no_output_____
###Markdown
Normalize 1st-level contrastsNow that the contrasts were estimated in subject space we can put them into a common reference space by normalizing them to a specific template. In this case we will be using SPM12's Normalize routine and normalize to the SPM12 tissue probability map `TPM.nii`.At this step you can also specify the voxel resolution of the output volumes. If you don't specify it, it will normalize to a voxel resolution of 2x2x2mm. As a training exercise, set the voxel resolution to 4x4x4mm.
###Code
from nipype.interfaces.spm import Normalize12
# Location of the template
template = '/opt/spm12-dev/spm12_mcr/spm/spm12/tpm/TPM.nii'
# Initiate the Normalize12 node here
normalize = Node(Normalize12(jobtype='estwrite',
tpm=template,
write_voxel_sizes=[4, 4, 4]
),
name="normalize")
###Output
_____no_output_____
###Markdown
Now we can connect the estimated contrasts to normalization node.
###Code
# Connect the nodes here
analysis1st.connect([(level1conest, normalize, [('con_images',
'apply_to_files')])
])
###Output
_____no_output_____
###Markdown
Datainput with `SelectFiles` and `iterables` As in the preprocessing hands-on, we will again be using [`SelectFiles`](../../../nipype_tutorial/notebooks/basic_data_input.ipynb#SelectFiles) and [`iterables`](../../../nipype_tutorial/notebooks/basic_iteration.ipynb). So, what do we need? From the preprocessing pipeline, we need the functional images, the motion parameters and the list of outliers. Also, for the normalization, we need the subject-specific anatomy.
###Code
# Import the SelectFiles
from nipype import SelectFiles
# String template with {}-based strings
templates = {'anat': '/data/ds000114/sub-{subj_id}/ses-test/anat/sub-{subj_id}_ses-test_T1w.nii.gz',
'func': '/output/datasink_handson/preproc/sub-{subj_id}_detrend.nii.gz',
'mc_param': '/output/datasink_handson/preproc/sub-{subj_id}.par',
'outliers': '/output/datasink_handson/preproc/art.sub-{subj_id}_outliers.txt'
}
# Create SelectFiles node
sf = Node(SelectFiles(templates, sort_filelist=True),
name='selectfiles')
###Output
_____no_output_____
###Markdown
Now we can specify over which subjects the workflow should iterate. As we only preprocessed a subset of the subjects, we can only use those for this analysis.
###Code
# list of subject identifiers
subject_list = ['02', '03', '04', '07', '08', '09']
sf.iterables = [('subj_id', subject_list)]
###Output
_____no_output_____
###Markdown
Gunzip Node SPM12 can accept NIfTI files as input, but only if they are not compressed ('unzipped'). Therefore, we need to use a `Gunzip` node to unzip the detrended file and another one to unzip the anatomy image, before we can feed them to the model specification and normalization nodes.
###Code
from nipype.algorithms.misc import Gunzip
# Initiate the two Gunzip node here
gunzip_anat = Node(Gunzip(), name='gunzip_anat')
gunzip_func = Node(Gunzip(), name='gunzip_func')
###Output
_____no_output_____
###Markdown
And as a final step, we just need to connect this `SelectFiles` node to the rest of the workflow.
###Code
# Connect SelectFiles node to the other nodes here
analysis1st.connect([(sf, gunzip_anat, [('anat', 'in_file')]),
(sf, gunzip_func, [('func', 'in_file')]),
(gunzip_anat, normalize, [('out_file', 'image_to_align')]),
(gunzip_func, modelspec, [('out_file', 'functional_runs')]),
(sf, modelspec, [('mc_param', 'realignment_parameters'),
('outliers', 'outlier_files'),
])
])
###Output
_____no_output_____
###Markdown
Data output with `DataSink`Now, before we run the workflow, let's again specify a `Datasink` folder to only keep those files that we want to keep.
###Code
from nipype.interfaces.io import DataSink
# Initiate DataSink node here
# Initiate the datasink node
output_folder = 'datasink_handson'
datasink = Node(DataSink(base_directory='/output/',
container=output_folder),
name="datasink")
## Use the following substitutions for the DataSink output
substitutions = [('_subj_id_', 'sub-')]
datasink.inputs.substitutions = substitutions
###Output
_____no_output_____
###Markdown
Now the next step is to specify all the output that we want to keep in our output folder `output`. Probably best to keep are the:- SPM.mat file and the spmT and spmF files from the contrast estimation node- normalized betas and anatomy
###Code
# Connect nodes to datasink here
analysis1st.connect([(level1conest, datasink, [('spm_mat_file', '1stLevel.@spm_mat'),
('spmT_images', '1stLevel.@T'),
('spmF_images', '1stLevel.@F'),
]),
(normalize, datasink, [('normalized_files', 'normalized.@files'),
('normalized_image', 'normalized.@image'),
]),
])
###Output
_____no_output_____
###Markdown
Visualize the workflowNow that the workflow is finished, let's visualize it again.
###Code
# Create 1st-level analysis output graph
analysis1st.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename='/output/work_1st/graph.png')
###Output
_____no_output_____
###Markdown
Run the WorkflowNow that everything is ready, we can run the 1st-level analysis workflow. Change ``n_procs`` to the number of jobs/cores you want to use.
###Code
analysis1st.run('MultiProc', plugin_args={'n_procs': 4})
###Output
_____no_output_____
###Markdown
Visualize results
###Code
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
First, let's look at the 1st-level design matrix of one subject (here sub-07), to verify that everything is as it should be.
###Code
from scipy.io import loadmat
# Using scipy's loadmat function we can access SPM.mat
spmmat = loadmat('/output/datasink_handson/1stLevel/sub-07/SPM.mat',
struct_as_record=False)
###Output
_____no_output_____
###Markdown
The design matrix and the names of the regressors are a bit hidden in the `spmmat` variable, but they can be accessed as follows:
###Code
designMatrix = spmmat['SPM'][0][0].xX[0][0].X
names = [i[0] for i in spmmat['SPM'][0][0].xX[0][0].name[0]]
###Output
_____no_output_____
###Markdown
Now before we can plot it, we just need to normalize the design matrix in such a way that each column has a maximum amplitude of 1. This is just for visualization purposes; otherwise the rotation parameters, with their rather small values, would not show up in the figure.
###Code
normed_design = designMatrix / np.abs(designMatrix).max(axis=0)
###Output
_____no_output_____
###Markdown
And we're ready to plot the design matrix.
###Code
fig, ax = plt.subplots(figsize=(8, 8))
plt.imshow(normed_design, aspect='auto', cmap='gray', interpolation='none')
ax.set_ylabel('Volume id')
ax.set_xticks(np.arange(len(names)))
ax.set_xticklabels(names, rotation=90);
###Output
_____no_output_____
###Markdown
Now that we're happy with the design matrix, let's look at how well the normalization worked.
###Code
import nibabel as nb
from nilearn.plotting import plot_anat
from nilearn.plotting import plot_glass_brain
# Load GM probability map of TPM.nii
img = nb.load('/opt/spm12-dev/spm12_mcr/spm/spm12/tpm/TPM.nii')
GM_template = nb.Nifti1Image(img.get_data()[..., 0], img.affine, img.header)
# Plot normalized subject anatomy
display = plot_anat('/output/datasink_handson/normalized/sub-07/wsub-07_ses-test_T1w.nii',
dim=-0.1)
# Overlay in edges GM map
display.add_edges(GM_template)
###Output
_____no_output_____
###Markdown
Let's look at the contrasts of one subject that we've just computed. In particular the F-contrast.
###Code
plot_glass_brain('/output/datasink_handson/normalized/sub-07/wess_0008.nii',
colorbar=True, display_mode='lyrz', black_bg=True, threshold=25,
title='subject 7 - F-contrast: Activation');
plot_glass_brain('/output/datasink_handson/normalized/sub-07/wess_0009.nii',
colorbar=True, display_mode='lyrz', black_bg=True, threshold=25,
title='subject 7 - F-contrast: Differences');
###Output
_____no_output_____
###Markdown
2nd-level Analysis Workflow StructureLast but not least, the group level analysis. This example will also directly include thresholding of the output, as well as some visualization. ImportsTo make sure that the necessary imports are done, here they are again:
###Code
# Get the Node and Workflow object
from nipype import Node, Workflow
# Specify which SPM to use
from nipype.interfaces.matlab import MatlabCommand
MatlabCommand.set_default_paths('/opt/spm12-dev/spm12_mcr/spm/spm12')
###Output
_____no_output_____
###Markdown
Create Nodes and Workflow connectionsNow we should know this part very well. Workflow for the 2nd-level analysis
###Code
# Create the workflow here
# Hint: use 'base_dir' to specify where to store the working directory
analysis2nd = Workflow(name='work_2nd', base_dir='/output/')
###Output
_____no_output_____
###Markdown
2nd-Level DesignThis step depends on your study design and the tests you want to perform. If you're using SPM to do the group analysis, you have the liberty to choose between a factorial design, a multiple regression design, one sample T-Test design, a paired T-Test design or a two sample T-Test design.For the current example, we will be using a one sample T-Test design.
###Code
from nipype.interfaces.spm import OneSampleTTestDesign
# Initiate the OneSampleTTestDesign node here
onesamplettestdes = Node(OneSampleTTestDesign(), name="onesampttestdes")
###Output
_____no_output_____
###Markdown
The next two steps are the same as for the 1st-level design, i.e. estimation of the model followed by estimation of the contrasts.
###Code
from nipype.interfaces.spm import EstimateModel, EstimateContrast
# Initiate the EstimateModel and the EstimateContrast node here
level2estimate = Node(EstimateModel(estimation_method={'Classical': 1}),
name="level2estimate")
level2conestimate = Node(EstimateContrast(group_contrast=True),
name="level2conestimate")
###Output
_____no_output_____
###Markdown
To finish the `EstimateContrast` node, we also need to specify which contrast should be computed. For a 2nd-level one sample t-test design, this is rather straightforward:
###Code
cont01 = ['Group', 'T', ['mean'], [1]]
level2conestimate.inputs.contrasts = [cont01]
###Output
_____no_output_____
###Markdown
Now, let's connect those three design nodes to each other.
###Code
# Connect OneSampleTTestDesign, EstimateModel and EstimateContrast here
analysis2nd.connect([(onesamplettestdes, level2estimate, [('spm_mat_file',
'spm_mat_file')]),
(level2estimate, level2conestimate, [('spm_mat_file',
'spm_mat_file'),
('beta_images',
'beta_images'),
('residual_image',
'residual_image')])
])
###Output
_____no_output_____
###Markdown
Thresholding of output contrastAnd to close, we will use SPM `Threshold`. With this routine, we can set a specific voxel threshold (e.g. *p*<0.001) and apply an FDR cluster threshold (e.g. *p*<0.05). As we only have 6 subjects, I recommend setting the voxel threshold to 0.01 and leaving the cluster threshold at 0.05.
###Code
from nipype.interfaces.spm import Threshold
level2thresh = Node(Threshold(contrast_index=1,
use_topo_fdr=True,
use_fwe_correction=False,
extent_threshold=0,
height_threshold=0.01,
height_threshold_type='p-value',
extent_fdr_p_threshold=0.05),
name="level2thresh")
# Connect the Threshold node to the EstimateContrast node here
analysis2nd.connect([(level2conestimate, level2thresh, [('spm_mat_file',
'spm_mat_file'),
('spmT_images',
'stat_image'),
])
])
###Output
_____no_output_____
###Markdown
Gray Matter MaskWe could run our 2nd-level workflow as it is. All the major nodes are there. But I nonetheless suggest that we use a gray matter mask to restrict the analysis to only gray matter voxels.In the 1st-level analysis, we normalized to SPM12's `TPM.nii` tissue probability atlas. Therefore, we could just take the gray matter probability map of this `TPM.nii` image (the first volume) and threshold it at a certain probability value to get a binary mask. This can of course also all be done in Nipype, but sometimes the direct bash code is quicker:
###Code
%%bash
TEMPLATE='/opt/spm12-dev/spm12_mcr/spm/spm12/tpm/TPM.nii'
# Extract the first volume with `fslroi`
fslroi $TEMPLATE GM_PM.nii.gz 0 1
# Threshold the probability mask at 10%
fslmaths GM_PM.nii -thr 0.10 -bin /output/datasink_handson/GM_mask.nii.gz
# Unzip the mask and delete the GM_PM.nii file
gunzip /output/datasink_handson/GM_mask.nii.gz
rm GM_PM.nii.gz
###Output
_____no_output_____
###Markdown
Let's take a look at this mask:
###Code
import nibabel as nb
mask = nb.load('/output/datasink_handson/GM_mask.nii')
mask.orthoview()
###Output
_____no_output_____
###Markdown
Now we just need to specify this binary mask as an `explicit_mask_file` for the one sample T-test node.
###Code
onesamplettestdes.inputs.explicit_mask_file = '/output/datasink_handson/GM_mask.nii'
###Output
_____no_output_____
###Markdown
Datainput with `SelectFiles` and `iterables` We will again be using [`SelectFiles`](../../../nipype_tutorial/notebooks/basic_data_input.ipynb#SelectFiles) and [`iterables`](../../../nipype_tutorial/notebooks/basic_iteration.ipynb). So, what do we need? Actually, just the 1st-level contrasts of all subjects, separated by contrast number.
###Code
# Import the SelectFiles
from nipype import SelectFiles
# String template with {}-based strings
templates = {'cons': '/output/datasink_handson/normalized/sub-*/w*_{cont_id}.nii'}
# Create SelectFiles node
sf = Node(SelectFiles(templates, sort_filelist=True),
name='selectfiles')
###Output
_____no_output_____
###Markdown
We are using `*` to tell `SelectFiles` that it can grab all available subjects and any contrast with a specific contrast id, independent of whether it's a t-contrast (`con`) or an F-contrast (`ess`). So, let's specify over which contrasts the workflow should iterate.
###Code
# list of contrast identifiers
contrast_id_list = ['0001', '0002', '0003', '0004', '0005',
'0006', '0007', '0008', '0009']
sf.iterables = [('cont_id', contrast_id_list)]
###Output
_____no_output_____
###Markdown
Now we need to connect the `SelectFiles` to the `OneSampleTTestDesign` node.
###Code
analysis2nd.connect([(sf, onesamplettestdes, [('cons', 'in_files')])])
###Output
_____no_output_____
###Markdown
Data output with `DataSink`Now, before we run the workflow, let's again specify a `Datasink` folder to only keep those files that we want to keep.
###Code
from nipype.interfaces.io import DataSink
# Initiate DataSink node here
# Initiate the datasink node
output_folder = 'datasink_handson'
datasink = Node(DataSink(base_directory='/output/',
container=output_folder),
name="datasink")
## Use the following substitutions for the DataSink output
substitutions = [('_cont_id_', 'con_')]
datasink.inputs.substitutions = substitutions
###Output
_____no_output_____
###Markdown
Now the next step is to specify all the output that we want to keep in our output folder `output`. Probably best to keep are the:- the SPM.mat file and the spmT images from the `EstimateContrast` node- the thresholded spmT images from the `Threshold` node
###Code
# Connect nodes to datasink here
analysis2nd.connect([(level2conestimate, datasink, [('spm_mat_file',
'2ndLevel.@spm_mat'),
('spmT_images',
'2ndLevel.@T'),
('con_images',
'2ndLevel.@con')]),
(level2thresh, datasink, [('thresholded_map',
'2ndLevel.@threshold')])
])
###Output
_____no_output_____
###Markdown
Visualize the workflowAnd we're good to go. Let's first take a look at the workflow.
###Code
# Create 2nd-level analysis output graph
analysis2nd.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename='/output/work_2nd/graph.png')
###Output
_____no_output_____
###Markdown
Run the WorkflowNow that everything is ready, we can run the 2nd-level analysis workflow. Change ``n_procs`` to the number of jobs/cores you want to use.
###Code
analysis2nd.run('MultiProc', plugin_args={'n_procs': 4})
###Output
_____no_output_____
###Markdown
Visualize resultsLet's take a look at the results. Keep in mind that we only have *`N=6`* subjects and that we set the voxel threshold to a very liberal `p<0.01`. Interpretation of the results should therefore be taken with a lot of caution.
###Code
from nilearn.plotting import plot_glass_brain
%matplotlib inline
out_path = '/output/datasink_handson/2ndLevel/'
plot_glass_brain(out_path + 'con_0001/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='average (FDR corrected)');
plot_glass_brain(out_path + 'con_0002/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Finger (FDR corrected)');
plot_glass_brain(out_path + 'con_0003/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Foot (FDR corrected)');
plot_glass_brain(out_path + 'con_0004/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Lips (FDR corrected)');
plot_glass_brain(out_path + 'con_0005/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Finger < others (FDR corrected)');
plot_glass_brain(out_path + 'con_0006/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Foot < others (FDR corrected)');
plot_glass_brain(out_path + 'con_0007/spmT_0001_thr.nii', display_mode='lyrz',
black_bg=True, colorbar=True, title='Lips > others (FDR corrected)');
###Output
_____no_output_____ |
LabExercise_1/DC_Cylinder_2D.ipynb | ###Markdown
Purpose For a direct current resistivity (DCR) survey, currents are injected into the earth and flow through it. Depending upon the conductivity contrast, the current flow in the earth will be distorted, and these changes can be measurable at the surface electrodes. Here, we focus on a cylinder target embedded in a halfspace and investigate what happens in the earth when static currents are injected. Unlike the sphere case, which is a finite target, the "coupling" among Tx, target (conductor or resistor), and Rx will differ significantly between scenarios. By investigating the changes in currents, electric fields, potential, and charges for different cylinder and survey geometries and Tx and Rx locations, we come to understand the geometric effects of the target in a DCR survey. Setup Question - Is the potential difference measured by a dipole over a conductive (/resistive) target higher or lower compared to the half-space reference?- How do the field lines bend in the presence of a conductive (/resistive) target?- Compared to the positive and negative sources (A and B), how are the positive and negative accumulated charges oriented around a conductive (/resistive) target?- How would you describe the secondary field pattern? Does it remind you of the response of an object fundamental to electromagnetics? Cylinder app Parameters: - **survey**: Type of survey - **A**: (+) Current electrode location - **B**: (-) Current electrode location - **M**: (+) Potential electrode location - **N**: (-) Potential electrode location - **r**: radius of cylinder - **xc**: x location of cylinder center - **zc**: z location of cylinder center - **$\rho_1$**: Resistivity of the halfspace - **$\rho_2$**: Resistivity of the cylinder - **Field**: Field to visualize - **Type**: which part of the field - **Scale**: Linear or Log Scale visualization
###Code
# cylinder_app is assumed to come from the DC resistivity widget module that accompanies
# this lab (e.g. the em_examples / geoscilabs DC cylinder widget); import it before running
app = cylinder_app()
display(app)
###Output
_____no_output_____ |
ScikitLearnEmulators/LogisticRegression/RealWorldDataset.ipynb | ###Markdown
The `Heart.csv` data set can be downloaded from: http://www-bcf.usc.edu/~gareth/ISL/data.html
###Code
# Load in the data set
import pandas as pd
import myLogisticRegression  # custom module from this repository (assumed importable next to this notebook)
heart_data = pd.read_csv('Heart.csv', sep =',')
heart_data = heart_data.dropna()
# Turn string categories to indicator variables
chest_pain_categories = ['typical', 'asymptomatic', 'nonanginal', 'nontypical']
heart_data.ChestPain = heart_data.ChestPain.astype("category", categories=chest_pain_categories).cat.codes
thal_categories = ['fixed', 'normal', 'reversable']
heart_data.Thal = heart_data.Thal.astype("category", categories=thal_categories).cat.codes
heart_data.head(5)
# Create X matrix with predictors and y vector with response
X = heart_data.drop('AHD', axis = 1)
X.head(5)
y = heart_data.iloc[:,14]
y = y.map({"Yes": 1, "No": -1})
y.head(5)
# Convert to numpy arrays
X = X.values
y = y.values
x_test, y_test, x_train, y_train = myLogisticRegression.split_data_set(X, y)
x_train, x_test = myLogisticRegression.standardize_data(x_train, x_test)
lambda_ = 1
my_model = myLogisticRegression.LogisticRegression(lambda_)
my_model.fit(x_train, y_train)
my_model.coef_
my_model.score(x_train, y_train)
my_model.score(x_test, y_test)
my_modelCV = myLogisticRegression.LogisticRegressionCV()
my_modelCV.fit(x_train, y_train)
my_modelCV.score(x_train, y_train)
my_modelCV.score(x_test, y_test)
my_modelCV.lambda_
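# Optional cross-check against scikit-learn (assumes scikit-learn is installed).
# Note: sklearn's C is the inverse regularization strength, so C = 1 / lambda_ is only an
# approximate equivalent if myLogisticRegression scales its penalty term differently.
from sklearn.linear_model import LogisticRegression
sk_model = LogisticRegression(C=1.0 / lambda_, solver='lbfgs', max_iter=1000)
sk_model.fit(x_train, y_train)
print(sk_model.score(x_train, y_train), sk_model.score(x_test, y_test))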
###Output
_____no_output_____ |
01-Principles/solutions/01 Python Basics-solution.ipynb | ###Markdown
1. Basics to PythonPython is a very simple language, and has a very straightforward syntax. It encourages programmers to program without boilerplate (prepared) code. The simplest directive in Python is the "print" directive - it simply prints out a line (and also includes a newline, unlike in C).There are two major Python versions, Python 2 and Python 3. Python 2 and 3 are quite different. This tutorial uses Python 3, because it more semantically correct and supports newer features. The Logic of Python- *Beautiful is better than ugly.*- *Explicit is better than implicit.*- *Simple is better than complex.*- *Complex is better than complicated.*- *Flat is better than nested.*- *Sparse is better than dense.*- *Readability counts.*- *Special cases aren't special enough to break the rules.*- *Although practicality beats purity.*- *Errors should never pass silently.*- *Unless explicitly silenced.*- *In the face of ambiguity, refuse the temptation to guess.*- *There should be one -- and preferably only one -- obvious way to do it.*- *Although that way may not be obvious at first unless you're Dutch.*- *Now is better than never.*- *Although never is often better than *right* now.*- *If the implementation is hard to explain, it's a bad idea.*- *If the implementation is easy to explain, it may be a good idea.*- *Namespaces are one honking great idea -- let's do more of those!*For example, one difference between Python 2 and 3 is the print statement. In Python 2, the `print` statement is not a function, and therefore it is invoked without parentheses. However, in Python 3, it is a function, and must be invoked with parentheses.To print a string in Python 3, just write:
###Code
print("This line will be printed.")
###Output
_____no_output_____
###Markdown
IndentationPython uses indentation for blocks, instead of curly braces. Both tabs and spaces are supported, but the standard indentation requires standard Python code to use four spaces. For example:
###Code
x = 1
if x == 1:
# indented four spaces
print("x is 1.")
###Output
_____no_output_____
###Markdown
Variables and TypesPython is completely object oriented, and not "statically typed". You do not need to declare variables before using them, or declare their type. Every variable in Python is an object.This tutorial will go over a few basic types of variables.
###Code
# integers
myint = 7
print(myint)
# strings - either double or single quotes!
mystring = "Hello"
print(mystring)
mystring = 'Hello'
print(mystring)
###Output
_____no_output_____
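###Markdown
Floating point numbers are another basic type and can be defined either directly or by converting an integer (a small additional example):
###Code
# floats
myfloat = 7.0
print(myfloat)
myfloat = float(7)
print(myfloat)
###Output
_____no_output_____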
###Markdown
The difference between the two is that using double quotes makes it easy to include apostrophes (whereas these would terminate the string if using single quotes). There are additional variations on defining strings that make it easier to include things such as carriage returns, backslashes and Unicode characters.We can use single operators on numbers and strings, such as concatenation:
###Code
one = 1
two = 2
three = one + two
print(three)
hello = "hello"
world = "world"
print(hello + " " + world)
###Output
_____no_output_____
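###Markdown
As mentioned above, double quotes make it easy to include apostrophes, while backslash escape sequences take care of newlines, tabs and Unicode characters (a small illustrative example):
###Code
# An apostrophe is easiest inside a double-quoted string
print("It's a beautiful day")
# Inside single quotes it has to be escaped with a backslash
print('It\'s a beautiful day')
# Escape sequences for a newline, a tab and a Unicode character
print("first line\nsecond\tline \u00e9")
###Output
_____no_output_____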
###Markdown
Assignments can be done on more than one variable simultaneously on the same line:
###Code
a, b = 3, 4
print(a,b)
###Output
_____no_output_____
###Markdown
Mixing operators between numbers and strings is not supported:
###Code
one = 1
two = 2
hello = "hello"
print(one + two + hello)
###Output
_____no_output_____
###Markdown
TuplesA tuple is an immutable list, i.e. a tuple cannot be changed in any way once it has been created. A tuple is defined analogously to lists, except that the set of elements is enclosed in parentheses instead of square brackets. The rules for indices are the same as for lists. Once a tuple has been created, you can't add elements to a tuple or remove elements from a tuple. Where is the benefit of tuples?- Tuples are faster than lists.- If you know that some data doesn't have to be changed, you should use tuples instead of lists, because this protects your data against accidental changes.- The main advantage of tuples consists in the fact that tuples can be used as keys in dictionaries, while lists can't.The following example shows how to define a tuple and how to access a tuple. Furthermore we can see that we raise an error, if we try to assign a new value to an element of a tuple:
###Code
t = ("tuples", "are", "immutable")
t[0]
t[0] = "assignments to elements cannot happen!"
###Output
_____no_output_____
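###Markdown
As noted above, it is exactly this immutability that allows tuples to be used as dictionary keys, which lists cannot (a small illustrative example):
###Code
# Tuples are hashable and can therefore serve as dictionary keys
capitals = {("Germany", "Europe"): "Berlin", ("Japan", "Asia"): "Tokyo"}
print(capitals[("Germany", "Europe")])
# A list as a key would raise: TypeError: unhashable type: 'list'
###Output
_____no_output_____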
###Markdown
ListsLists are very similar to arrays. They can contain any type of variable, and they can contain as many variables as you wish. Lists can also be iterated over in a very simple manner. Here is an example of how to build a list.
###Code
mylist = []
mylist.append(1)
mylist.append(2)
mylist.append(3)
print(mylist[0]) # prints 1
print(mylist[1]) # prints 2
print(mylist[2]) # prints 3
# prints out 1,2,3
for x in mylist:
print(x)
###Output
_____no_output_____
###Markdown
Accessing an index that does not exist generates an error:
###Code
mylist = [1,2,3]
print(mylist[10])
###Output
_____no_output_____
###Markdown
There are a number of common operations with lists, including `append` and `pop`:
###Code
mylist = [1,2,3,4]
mylist.append(5)
mylist
# changes the state of the object
mylist.pop()
mylist
###Output
_____no_output_____
###Markdown
Basic OperatorsThis section explains how to use basic operators in Python.Python has a large number of built-in operators, which can be applied to all numerical types:- **+, -**: Addition, Subtraction- **\*, %**: Multiplication, Modulo- **/**: Division (note that in Python 2 dividing two integers performs a floor division)- **//**: Truncated division (floor division in Python 3+)- **+x, -x**: Unary plus and unary minus- **~x**: Bit-wise negation (NOT)- **\****: Exponentiation (powers)- **or, and, not**: Boolean or, Boolean and, Boolean not- **in**: Element of- **<, <=, >, >=, ==, !=**: Comparison operators- **|, &, ^**: Bitwise or, bitwise and, bitwise XOR- **<<, >>**: Shift operatorsJust as in any other programming language, the addition, subtraction, multiplication, and division operators can be used with numbers.
###Code
number = 1 + 2 * 3 / 4.0
print(number)
###Output
_____no_output_____
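###Markdown
A few of the other operators listed above, such as floor division, comparisons, membership tests and bitwise operations, work as follows (a small illustrative example):
###Code
# True division returns a float, floor division rounds down to an integer
print(7 / 2, 7 // 2)
# Comparison and membership operators return booleans
print(3 <= 4, 4 != 5, 3 in [1, 2, 3])
# Bitwise and shift operators work on the binary representation of integers
print(6 & 3, 6 | 3, 6 ^ 3, 1 << 3)
###Output
_____no_output_____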
###Markdown
Another operator available is the modulo (%) operator, which returns the integer remainder of the division. dividend % divisor = remainder.
###Code
remainder = 11 % 3
print(remainder)
###Output
_____no_output_____
###Markdown
Using two multiplication symbols makes a power relationship.
###Code
squared = 7 ** 2
cubed = 2 ** 3
###Output
_____no_output_____
###Markdown
Python supports concatenating strings using the addition operator:
###Code
helloworld = "hello" + " " + "world"
print(helloworld)
###Output
_____no_output_____
###Markdown
Python also supports multiplying strings to form a string with a repeating sequence:
###Code
lotsofhellos = "hello" * 10
print(lotsofhellos)
###Output
_____no_output_____
###Markdown
Using Operators with ListsLists can be joined with the addition operators:
###Code
even_numbers = [2,4,6,8]
odd_numbers = [1,3,5,7]
all_numbers = odd_numbers + even_numbers
print(all_numbers)
###Output
_____no_output_____
###Markdown
Just as in strings, Python supports forming new lists with a repeating sequence using the multiplication operator:
###Code
print([1,2,3] * 3)
###Output
_____no_output_____
###Markdown
String FormattingPython uses C-style string formatting to create new, formatted strings. The "%" operator is used to format a set of variables enclosed in a "tuple" (a fixed size list), together with a format string, which contains normal text together with "argument specifiers", special symbols like "%s" and "%d".Let's say you have a variable called "name" with your user name in it, and you would then like to print out a greeting to that user.
###Code
name = "John"
print("Hello, %s!" % name)
###Output
_____no_output_____
###Markdown
To use two or more argument specifiers, use a tuple (parentheses):
###Code
name = "John"
age = 23
print("%s is %d years old." % (name, age))
###Output
_____no_output_____
###Markdown
Any object which is not a string can be formatted using the %s operator as well. The string which returns from the "repr" method of that object is formatted as the string. For example:
###Code
mylist = [1,2,3]
print("A list: %s" % mylist)
###Output
_____no_output_____
###Markdown
Here are some of the basic argument specifiers you should know:- %s : string (or any object with a string representation, like numbers)- %d : integers- %f : floating point numbers- %.<number of digits>f : floating point number with a fixed number of digits to the right of the dot- %x : integers in hex representation String OperationsStrings are bits of text. They can be defined as anything between quotes:
###Code
astring = "Hello world!"
astring2 = 'Hello world!'
###Output
_____no_output_____
###Markdown
As you can see, the first thing you learned was printing a simple sentence. This sentence was stored by Python as a string. However, instead of immediately printing strings out, we will explore the various things you can do to them. You can also use single quotes to assign a string; however, you will face problems if the value to be assigned itself contains single quotes. For example, to assign a string that itself contains single quotes (such as: single quotes are ' ') you need to wrap it in double quotes, like this:
###Code
astring = "Hello world!"
print("single quotes are ' '")
print(len(astring))
###Output
_____no_output_____
###Markdown
That prints out 12, because "Hello world!" is 12 characters long, including punctuation and spaces.
###Code
astring = "Hello world!"
print(astring.count("l"))
###Output
_____no_output_____
###Markdown
Because there are three instances of the letter 'l' within 'Hello world!'.
###Code
astring = "Hello world!"
print(astring[3:7])
###Output
_____no_output_____
###Markdown
This prints a slice of the string, starting at index 3, and ending at index 6. But why 6 and not 7? Again, most programming languages do this - it makes doing math inside those brackets easier.If you just have one number in the brackets, it will give you the single character at that index. If you leave out the first number but keep the colon, it will give you a slice from the start to the number you left in. If you leave out the second number, it will give you a slice from the first number to the end.You can even put negative numbers inside the brackets. They are an easy way of starting at the end of the string instead of the beginning. This way, -3 means "3rd character from the end".
###Code
astring = "Hello world!"
print(astring[3:7:2])
###Output
_____no_output_____
###Markdown
This prints the characters of the string from index 3 up to (but not including) 7, taking every second character. This is extended slice syntax. The general form is [start:stop:step].
###Code
astring = "Hello world!"
print(astring[3:7])
print(astring[3:7:1])
###Output
_____no_output_____
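###Markdown
The negative indices mentioned above count backwards from the end of the string; a quick sketch:
###Code
astring = "Hello world!"
print(astring[0])    # first character: 'H'
print(astring[-3])   # third character from the end: 'l'
print(astring[:5])   # from the start up to (but not including) index 5: 'Hello'
print(astring[5:])   # from index 5 to the end: ' world!'
###Output
_____no_output_____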
###Markdown
There is no function like strrev in C to reverse a string. But with the above mentioned type of slice syntax you can easily reverse a string like this:
###Code
astring = "Hello world!"
print(astring[::-1])
###Output
_____no_output_____
###Markdown
We can trivially make all letters upper or lowercase as needed:
###Code
astring = "Hello world!"
print(astring.upper())
print(astring.lower())
###Output
_____no_output_____
###Markdown
The split method splits the string into a list of smaller strings. Since this example splits at a space, the first item in the list will be "Hello", and the second will be "world!".
###Code
astring = "Hello world!"
afewwords = astring.split(" ")
afewwords
###Output
_____no_output_____
###Markdown
ConditionsPython uses boolean variables to evaluate conditions. The boolean values True and False are returned when an expression is compared or evaluated. For example:
###Code
x = 2
print(x == 2) # prints out True
print(x == 3) # prints out False
print(x < 3) # prints out True
###Output
_____no_output_____
###Markdown
Notice that variable assignment is done using a single equals operator "=", whereas comparison between two variables is done using the double equals operator "==". The "not equals" operator is marked as "!=". Boolean operatorsThe "and" and "or" boolean operators allow building complex boolean expressions, for example:
###Code
name = "John"
age = 23
if name == "John" and age == 23:
print("Your name is John, and you are also 23 years old.")
if name == "John" or name == "Rick":
print("Your name is either John or Rick.")
###Output
_____no_output_____
###Markdown
The "in" operatorThe "in" operator could be used to check if a specified object exists within an iterable object container, such as a list:
###Code
name = "John"
if name in ["John", "Rick"]:
print("Your name is either John or Rick.")
###Output
_____no_output_____
###Markdown
Python uses indentation to define code blocks, instead of brackets. The standard Python indentation is 4 spaces, although tabs and any other space size will work, as long as it is consistent. Notice that code blocks do not need any termination.Here is the general shape of Python's "if" statement using code blocks: if <condition>: .... elif <another condition>: # else if .... else: .... (a runnable sketch appears after the next code cell). The 'is' operatorUnlike the double equals operator "==", the "is" operator does not match the values of the variables, but the instances themselves. For example:
###Code
x = [1,2,3]
y = [1,2,3]
print(x == y) # Prints out True
print(x is y) # Prints out False
###Output
_____no_output_____
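###Markdown
A runnable sketch of the if/elif/else structure described above:
###Code
x = 7
if x > 10:
    print("x is large")
elif x > 5:
    print("x is medium")   # this branch runs, since 7 > 5
else:
    print("x is small")
###Output
_____no_output_____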
###Markdown
The "not" operatorUsing "not" before a boolean expression inverts it:
###Code
print(not False) # Prints out True
print((not False) == (False)) # Prints out False
###Output
_____no_output_____
###Markdown
LoopsThere are two types of loops in Python, for and while.
###Code
primes = [2, 3, 5, 7]
for prime in primes:
print(prime)
###Output
_____no_output_____
###Markdown
For loops can iterate over a sequence of numbers using the "range" and "xrange" functions. The difference between range and xrange is that the range function returns a new list with numbers of that specified range, whereas xrange returns an iterator, which is more efficient. (Python 3 uses the range function, which acts like xrange). Note that the range function is zero based.While loops repeat as long as a certain boolean condition is met. For example:
###Code
count = 0
while count < 5:
print(count)
count += 1 # This is the same as count = count + 1
###Output
_____no_output_____
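###Markdown
The range function described above can drive a for loop directly; a minimal sketch:
###Code
# Prints out the numbers 0, 1, 2, 3, 4
for x in range(5):
    print(x)
# Prints out 3, 5, 7 (start at 3, stop before 8, step by 2)
for x in range(3, 8, 2):
    print(x)
###Output
_____no_output_____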
###Markdown
'break' and 'continue' statements`break` is used to exit a for loop or a while loop, whereas `continue` is used to skip the current block, and return to the "for" or "while" statement. A few examples:
###Code
count = 0
while True:
print(count)
count += 1
if count >= 5:
break
# Prints out only odd numbers - 1,3,5,7,9
for x in range(10):
# Check if x is even
if x % 2 == 0:
continue
print(x)
###Output
_____no_output_____
###Markdown
Unlike languages such as C and C++, Python allows an "else" clause on loops. When a "for" or "while" loop finishes normally (the condition becomes false or the iterable is exhausted), the code in the "else" block is executed. If a break statement is executed inside the loop, the "else" part is skipped. Note that the "else" part still runs even if a continue statement is used.Here are a few examples:
###Code
count=0
while(count<5):
print(count)
count +=1
else:
print("count value reached %d" %(count))
# Prints out 1,2,3,4
for i in range(1, 10):
if(i%5==0):
break
print(i)
else:
print("this is not printed because for loop is terminated because of break but not due to fail in condition")
###Output
_____no_output_____
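###Markdown
The note above that a continue statement does not skip the "else" clause can be checked with a small sketch:
###Code
# Prints 1, 3 and then "done" - continue skips the even numbers,
# but the loop still finishes without a break, so the "else" block runs
for i in range(1, 5):
    if i % 2 == 0:
        continue
    print(i)
else:
    print("done")
###Output
_____no_output_____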
###Markdown
FunctionsFunctions are a convenient way to divide your code into useful blocks, allowing us to order our code, make it more readable, reuse it and save some time. Also functions are a key way to define interfaces so programmers can share their code.As we have seen in previous tutorials, Python makes use of blocks.A block is an area of code written in the format of: block_head: 1st block line 2nd block line ...Where a block line is more Python code (even another block), and the block head is of the following format: block_keyword block_name(argument1,argument2, ...) Block keywords you already know are "if", "for", and "while".Functions in Python are defined using the block keyword "def", followed with the function's name as the block's name. For example:
###Code
def my_function():
print("Hello From My Function!")
my_function()
###Output
_____no_output_____
###Markdown
Functions may also receive arguments (variables passed from the caller to the function). For example:
###Code
def my_function_with_args(username, greeting):
print("Hello, %s , From My Function!, I wish you %s"%(username, greeting))
my_function_with_args("Greg", "well")
###Output
_____no_output_____
###Markdown
Functions may return a value to the caller, using the keyword 'return'. For example:
###Code
def sum_two_numbers(a, b):
return a + b
sum_two_numbers(2, 2)
###Output
_____no_output_____
###Markdown
Function arguments can also be **defaulted**, meaning that the default value is used if no argument is passed:
###Code
def sum_three_numbers(a, b=5, c=10, d="Hello"):
return a+b+c
print(sum_three_numbers(2))
print(sum_three_numbers(2, b=3))
###Output
_____no_output_____
###Markdown
Note that parameters **without** default values must come before parameters that have defaults. In addition, we can pass arguments by parameter name when calling the function, even for parameters that are not defaulted:
###Code
sum_three_numbers(a=10, b=2, c=4)
###Output
_____no_output_____
###Markdown
This helps the developer when a function has many arguments, rather than relying on memory of the parameter order and defaults. We can also create functions that **return multiple values** as a tuple:
###Code
def split_one_into_three(x=10):
# split our number into three smaller ones
return (x%2, x/2, x+2)
split_one_into_three()
###Output
_____no_output_____
###Markdown
These values can be extracted into one tuple, or manually split into new variables:
###Code
a, b, c = split_one_into_three(20)
print("a:{}, b:{}, c:{}".format(a,b,c))
d = split_one_into_three(20)
print("d:{}".format(d))
###Output
_____no_output_____
###Markdown
Local and Global Variables in FunctionsVariable names are by default local to the function, in which they get defined.
###Code
def f():
s = "Python"
print(s)
return None
f()
print(s)
###Output
_____no_output_____
###Markdown
As you can see, printing `s` fails with a NameError: the local variable `s` only exists inside the function `f()`, so once the function returns it is no longer **globally** visible. Note that, unlike functions, an `if` statement or `for` loop does not create a new scope in Python; a variable assigned inside one is still visible afterwards. Classes and ObjectsObjects are an encapsulation of variables and functions into a single entity. Objects get their variables and functions from classes. Classes are essentially a template to create your objects.A very basic class would look something like this:
###Code
class MyClass:
variable = "blah"
def function(self):
print("This is a message inside the class.")
###Output
_____no_output_____
###Markdown
We'll explain why you have to include that "self" as a parameter a little bit later. First, to assign the above class(template) to an object you would do the following:
###Code
myobjectx = MyClass()
###Output
_____no_output_____
###Markdown
Now the variable "myobjectx" holds an object of the class "MyClass" that contains the variable and the function defined within the class called "MyClass".To access the variable inside of the newly created object "myobjectx" you would do the following:
###Code
myobjectx.variable
###Output
_____no_output_____
###Markdown
You can create multiple different objects that are of the same class(have the same variables and functions defined). However, each object contains independent copies of the variables defined in the class. For instance, if we were to define another object with the "MyClass" class and then change the string in the variable above:
###Code
myobjectx = MyClass()
myobjecty = MyClass()
myobjecty.variable = "yackity"
# Then print out both values
print(myobjectx.variable)
print(myobjecty.variable)
###Output
_____no_output_____
###Markdown
To access a function inside of an object you use notation similar to accessing a variable:
###Code
myobjectx.function()
###Output
_____no_output_____
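###Markdown
The "self" parameter mentioned earlier is simply the object the method was called on: calling myobjectx.function() passes myobjectx in as self. A minimal sketch (the Counter class here is just for illustration):
###Code
class Counter:
    count = 0
    def increment(self):
        # 'self' refers to the particular object this method was called on
        self.count = self.count + 1
c = Counter()
c.increment()
c.increment()
print(c.count)   # prints 2
###Output
_____no_output_____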
###Markdown
DictionariesA dictionary is a data type similar to arrays, but works with keys and values instead of indexes. Each value stored in a dictionary can be accessed using a key, which can be any immutable (hashable) type of object (a string, a number, a tuple, etc.) instead of using its index to address it.For example, a database of phone numbers could be stored using a dictionary like this:
###Code
phonebook = {}
phonebook["John"] = 938477566
phonebook["Jack"] = 938377264
phonebook["Jill"] = 947662781
print(phonebook)
###Output
_____no_output_____
###Markdown
Alternatively, a dictionary can be initialized with the same values in the following notation:
###Code
phonebook = {
"John" : 938477566,
"Jack" : 938377264,
"Jill" : 947662781
}
print(phonebook)
###Output
_____no_output_____
###Markdown
Iterating over dictionariesDictionaries can be iterated over, just like a list. However, unlike a list, a dictionary is not accessed by position, and in Python versions before 3.7 it did not even guarantee the order of its entries (since 3.7, insertion order is preserved). To iterate over key-value pairs, use the following syntax:
###Code
phonebook = {"John" : 938477566,"Jack" : 938377264,"Jill" : 947662781}
for name, number in phonebook.items():
print("Phone number of %s is %d" % (name, number))
###Output
_____no_output_____
###Markdown
Removing a valueTo remove a specified key (and its value), use either one of the following notations:
###Code
phonebook = {
"John" : 938477566,
"Jack" : 938377264,
"Jill" : 947662781
}
del phonebook["John"]
print(phonebook)
phonebook = {
"John" : 938477566,
"Jack" : 938377264,
"Jill" : 947662781
}
phonebook.pop("John")
print(phonebook)
###Output
_____no_output_____
###Markdown
Operators on DictionariesOperators include:- __len(d)__: returns the number of stored entries, i.e. the number of (key, value) pairs- __del d[k]__: deletes the key k together with its value- __k in d__: True, if a key k exists in the dictionary d- __k not in d__: True, if a key k does not exist in the dictionary d
###Code
morse = {"A" : ".-", "B" : "-...", "C" : "-.-.", "D" : "-..", "E" : ".", "F" : "..-.",
"G" : "--.", "H" : "....", "I" : "..", "J" : ".---", "K" : "-.-", "L" : ".-..", "M" : "--",
"N" : "-.", "O" : "---", "P" : ".--.", "Q" : "--.-", "R" : ".-.", "S" : "...", "T" : "-", "U" : "..-",
"V" : "...-", "W" : ".--", "X" : "-..-", "Y" : "-.--", "Z" : "--..", "0" : "-----", "1" : ".----",
"2" : "..---", "3" : "...--", "4" : "....-", "5" : ".....", "6" : "-....", "7" : "--...",
"8" : "---..", "9" : "----.", "." : ".-.-.-", "," : "--..--"
}
len(morse)
###Output
_____no_output_____
###Markdown
We see that lowercase `a` does not exist in morse dict:
###Code
"a" in morse
###Output
_____no_output_____
###Markdown
..but returns `True` when negated:
###Code
"a" not in morse
###Output
_____no_output_____
###Markdown
Converting lists into a dictionaryLet's say we have two lists:1. The name of each shopping item2. The cost of each item in the shopThey are separate lists whose indices match up, so we can trivially convert them into a `dict` using `zip` and a `dict` conversion:
###Code
item = ["Toothbrush","Hairbrush","Soap"]
cost = [3.00, 15.00, 1.00]
zip(item, cost)
d = dict(zip(item, cost))
d
###Output
_____no_output_____
###Markdown
Tasks Task 1Write a `for` loop that finds all of the numbers which are divisible by 3 but are not a multiple of 6 until you reach 200.
###Code
# your codes here
for i in range(200):
if i % 3 == 0 and i % 6 > 0:
print(i)
###Output
_____no_output_____
###Markdown
Task 2Create a function `factorial(n)` which computes the factorial of a given integer input. The factorial is the product of every positive integer up to and including n, as follows:$$F(n)=\prod_{i=1}^n i, \qquad n > 0$$Call the function 3 times, with inputs 4, 7 and 10. Print the outputs.
###Code
# your codes here
def factorial(n):
a = 1
    for i in range(1, n + 1):
a *= i
return a
print(factorial(4))
print(factorial(7))
print(factorial(10))
###Output
_____no_output_____
###Markdown
Task 3The fibonacci sequence is a series of numbers where the number at the current step is calculated from the summation of values at the previous two steps:$$x_{n} = x_{n-1} + x_{n-2} \\x_0 = 0 \\x_1 = 1$$or alternatively the closed form solution is given by:$$F(n)=\frac{\left(1+\sqrt{5}\right)^n-\left(1-\sqrt{5}\right)^n}{2^n\sqrt{5}}$$Create a function which calculates all of the fibonacci sequence numbers up to step $n$, returning $F(n)$. Do this using both the closed-form solution and using the step-wise method. Do this for 20 steps and print out both the sequences using closed and numeric.
###Code
# your codes here
import math
closed_list = []
method_list = []
def fib_closed(n):
return ((1+math.sqrt(5))**n - (1-math.sqrt(5))**n) / ((2**n) * math.sqrt(5))
def fib_method(n):
if n==0:
return 0
elif n == 1:
return 1
else:
return fib_method(n-1) + fib_method(n-2)
n=20
for i in range(n):
closed_list.append(fib_closed(i))
method_list.append(fib_method(i))
print(closed_list)
print(method_list)
###Output
_____no_output_____ |
src/Dissertation preprocess.ipynb | ###Markdown
Project Dissertation
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import nltk
import inflect
from nltk import word_tokenize, sent_tokenize
from nltk.corpus import stopwords
from nltk.stem import LancasterStemmer, WordNetLemmatizer
import sklearn
from sklearn.feature_extraction.text import CountVectorizer
import spacy
from nltk.tokenize.treebank import TreebankWordDetokenizer
from tqdm import tqdm, tqdm_notebook, tnrange
tqdm.pandas(desc='Progress')
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import sys
import os
import re
import string
import unicodedata
import itertools
from bs4 import BeautifulSoup
import unidecode
from word2number import w2n
print('Python version:',sys.version)
print('Pandas version:',pd.__version__)
df = pd.read_csv("C:/Users/Chan Ken Lok/Documents/Dissertations/depression-detection/data/tweets_combined.csv")
# insert the file path for the csv dataset
pd.set_option('display.max_colwidth', 1)
df.head()
# examples of the dataset
df.target.value_counts()
fig = plt.figure(figsize=(5,3))
ax = sns.barplot(x=df.target.unique(),y=df.target.value_counts());
ax.set(xlabel='Labels');
df.tweet.head(10), df.tweet.tail(10)
# check non-depressive tweets
df[df["target"]==0].tweet.head()
# check depressive tweets
df[df["target"]==1].tweet.head()
###Output
_____no_output_____
###Markdown
Pre process method
###Code
df['tweet'] = df.tweet.progress_apply(lambda x: re.sub('\n', ' ', x))
df['tweet'].head(5)
contraction_dict = {"ain't": "is not", "aren't": "are not","can't": "cannot", "'cause": "because", "could've": "could have", "couldn't": "could not", "didn't": "did not", "doesn't": "does not", "don't": "do not", "hadn't": "had not", "hasn't": "has not", "haven't": "have not", "he'd": "he would","he'll": "he will", "he's": "he is", "how'd": "how did", "how'd'y": "how do you", "how'll": "how will", "how's": "how is", "I'd": "I would", "I'd've": "I would have", "I'll": "I will", "I'll've": "I will have","I'm": "I am", "I've": "I have", "i'd": "i would", "i'd've": "i would have", "i'll": "i will", "i'll've": "i will have","i'm": "i am", "i've": "i have", "isn't": "is not", "it'd": "it would", "it'd've": "it would have", "it'll": "it will", "it'll've": "it will have","it's": "it is", "let's": "let us", "ma'am": "madam", "mayn't": "may not", "might've": "might have","mightn't": "might not","mightn't've": "might not have", "must've": "must have", "mustn't": "must not", "mustn't've": "must not have", "needn't": "need not", "needn't've": "need not have","o'clock": "of the clock", "oughtn't": "ought not", "oughtn't've": "ought not have", "shan't": "shall not", "sha'n't": "shall not", "shan't've": "shall not have", "she'd": "she would", "she'd've": "she would have", "she'll": "she will", "she'll've": "she will have", "she's": "she is", "should've": "should have", "shouldn't": "should not", "shouldn't've": "should not have", "so've": "so have","so's": "so as", "this's": "this is","that'd": "that would", "that'd've": "that would have", "that's": "that is", "there'd": "there would", "there'd've": "there would have", "there's": "there is", "here's": "here is","they'd": "they would", "they'd've": "they would have", "they'll": "they will", "they'll've": "they will have", "they're": "they are", "they've": "they have", "to've": "to have", "wasn't": "was not", "we'd": "we would", "we'd've": "we would have", "we'll": "we will", "we'll've": "we will have", "we're": "we are", "we've": "we have", "weren't": "were not", "what'll": "what will", "what'll've": "what will have", "what're": "what are", "what's": "what is", "what've": "what have", "when's": "when is", "when've": "when have", "where'd": "where did", "where's": "where is", "where've": "where have", "who'll": "who will", "who'll've": "who will have", "who's": "who is", "who've": "who have", "why's": "why is", "why've": "why have", "will've": "will have", "won't": "will not", "won't've": "will not have", "would've": "would have", "wouldn't": "would not", "wouldn't've": "would not have", "y'all": "you all", "y'all'd": "you all would","y'all'd've": "you all would have","y'all're": "you all are","y'all've": "you all have","you'd": "you would", "you'd've": "you would have", "you'll": "you will", "you'll've": "you will have", "you're": "you are", "you've": "you have"}
#contraction dictionary, could add more since the contraction module could not be import
#to replace the contraction in the texts
def get_contractions_dict(contraction_dict):
contraction_re = re.compile('(%s)' % '|'.join(contraction_dict.keys()))
return contraction_dict, contraction_re
contractions, contractions_re = get_contractions_dict(contraction_dict)
def replace_contractions(text):
def replace(match):
return contractions[match.group(0)]
return contractions_re.sub(replace, text)
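# Illustrative note: with the dictionary above, replace_contractions("I can't go, it isn't time")
# returns "I cannot go, it is not time".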
##List of method to preprocess the words
def remove_non_ascii(words):
"""Remove non-ASCII characters from list of tokenized words"""
new_words = []
for word in words:
new_word = unicodedata.normalize('NFKD', word).encode('ascii', 'ignore').decode('utf-8', 'ignore')
new_words.append(new_word)
return new_words
def to_lowercase(words):
"""Convert all characters to lowercase from list of tokenized words"""
new_words = []
for word in words:
new_word = word.lower()
new_words.append(new_word)
return new_words
def remove_punctuation(words):
"""Remove punctuation from list of tokenized words"""
new_words = []
for word in words:
new_word = re.sub(r'[^\w\s]', '', word)
if new_word != '':
new_words.append(new_word)
return new_words
def replace_numbers(words):
"""Replace all interger occurrences in list of tokenized words with textual representation"""
p = inflect.engine()
new_words = []
for word in words:
if word.isdigit():
new_word = p.number_to_words(word)
new_words.append(new_word)
else:
new_words.append(word)
return new_words
def remove_stopwords(words):
"""Remove stop words from list of tokenized words"""
new_words = []
for word in words:
if word not in stopwords.words('english'):
new_words.append(word)
return new_words
def lemmatize_verbs(words):
"""Lemmatize verbs in list of tokenized words"""
lemmatizer = WordNetLemmatizer()
lemmas = []
for word in words:
lemma = lemmatizer.lemmatize(word, pos='v')
lemmas.append(lemma)
return lemmas
def normalize(words):
words = remove_non_ascii(words)
words = to_lowercase(words)
words = remove_punctuation(words)
words = replace_numbers(words)
words = remove_stopwords(words)
words = lemmatize_verbs(words)
return words
def tweet_clean(words):
words = re.sub(r"http\S+", "", words)# remove urls c
words = re.sub(r'<([^>]*)>', ' ', words) # remove emojis c
words = re.sub(r"pic\S+", "", words)# maybe remove pictures
words = re.sub(r'@\w+', ' ', words) # remove at mentions c
words = re.sub(r'#\w+', ' ', words) # remove hashtag symbol c
words = replace_contractions(words) # no need change method c
pattern = re.compile(r"[ \n\t]+")
words = pattern.sub(" ", words)
words = "".join("".join(s)[:2] for _, s in itertools.groupby(words)) # ???
words = re.sub(r'[^A-Za-z0-9,?.!]+', ' ', words) # remove all symbols and punctuation except for . , ! and ? might change the process
return words.strip()
def preprocess(sample):
# Tokenize
words = word_tokenize(sample)
# Normalize
return normalize(words)
#preprocess the word
def tokenizer(s):
word = tweet_clean(s)
return preprocess(word)
%%time
df['tweet'] = df['tweet'].apply(lambda x : tokenizer(x))
df['tweet'] = df['tweet'].apply(lambda x : TreebankWordDetokenizer().detokenize(x))
#tokenize to preprocessed and then detokenize to get word count
df['tweet'].head(3)
#example of the processed word
###Output
_____no_output_____
###Markdown
Separating the data into train, validation, and test sets
###Code
def split_train_test(df, test_size=0.2):
train, val = train_test_split(df, test_size=test_size,random_state=42)
return train.reset_index(drop=True), val.reset_index(drop=True)
# create train and validation set
# 20% of the total is the test set; the remaining 80% is split 80/20 into train and validation sets
train_val, test = split_train_test(df, test_size=0.2)
train, val = split_train_test(train_val, test_size=0.2)
train.to_csv("train.csv", index=False)
val.to_csv("val.csv", index=False)
test.to_csv("test.csv", index=False)
train.shape, val.shape, test.shape
fig = plt.figure(figsize=(10,4))
fig.subplots_adjust(hspace=0.4, wspace=0.4)
ax = fig.add_subplot(1,3,1)
ax = sns.barplot(x=train.target.unique(),y=train.target.value_counts())
ax.set(xlabel='Labels', ylabel="counts", title="train")
ax1 = fig.add_subplot(1,3,2)
ax1 = sns.barplot(x=val.target.unique(),y=val.target.value_counts())
ax1.set(xlabel='Labels', ylabel="counts", title="validation")
ax2 = fig.add_subplot(1,3,3)
ax2 = sns.barplot(x=test.target.unique(),y=test.target.value_counts())
ax2.set(xlabel='Labels', ylabel="counts", title="test")
val.head(3)
train.head(3)
###Output
_____no_output_____
###Markdown
For target that is 0
###Code
nondepressed_docs = [row['tweet'] for index,row in train.iterrows() if row['target'] == 0 ]
vec_0 = CountVectorizer()
X_0 = vec_0.fit_transform(nondepressed_docs)
tdm_0 = pd.DataFrame(X_0.toarray(), columns=vec_0.get_feature_names())
word_list_0 = vec_0.get_feature_names();
count_list_0 = X_0.toarray().sum(axis=0)
freq_0 = dict(zip(word_list_0,count_list_0))
probs_0 = []
for word,count in zip(word_list_0,count_list_0):
probs_0.append(count/len(word_list_0))
res_0 = dict(zip(word_list_0,probs_0))
docs = [row['tweet'] for index,row in train.iterrows()]
vec = CountVectorizer()
X = vec.fit_transform(docs)
total_features = len(vec.get_feature_names())
total_cnts_features_0 = count_list_0.sum(axis=0)
###Output
_____no_output_____
###Markdown
Method for building the smoothed word-probability dictionary used for comparison
###Code
def create_dict_test_0(word_list,freq,total_cnts_features):
prob_s_with_ls = []
for word in word_list:
if word in freq.keys():
count = freq[word]
else:
count = 0
prob_s_with_ls.append((count + 1)/(total_cnts_features + total_features))
    return dict(zip(word_list,prob_s_with_ls))
###Output
_____no_output_____
###Markdown
For target = 1 (depressive tweets)
###Code
depressed_docs = [row['tweet'] for index,row in train.iterrows() if row['target'] == 1 ]
vec_1 = CountVectorizer()
X_1 = vec_1.fit_transform(depressed_docs)
tdm_1 = pd.DataFrame(X_1.toarray(), columns=vec_1.get_feature_names())
word_list_1 = vec_1.get_feature_names();
count_list_1 = X_1.toarray().sum(axis=0)
freq_1 = dict(zip(word_list_1,count_list_1))
probs_1 = []
for word,count in zip(word_list_1,count_list_1):
probs_1.append(count/len(word_list_1))
res_1 = dict(zip(word_list_1,probs_1))
total_cnts_features_1 = count_list_1.sum(axis=0)
print(total_cnts_features_1)
###Output
4657
###Markdown
Setting up result with training data
###Code
new_sentence = 'cut depress sad'
new_word_list = word_tokenize(new_sentence)
def show_new_dict(wordList, freq, total_cnts_features):
prob_s_with_ls = []
for word in wordList:
if word in freq.keys():
count = freq[word]
else:
count = 0
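        # add-one (Laplace) smoothing: unseen words get a small non-zero probability
        # instead of zeroing out the product of probabilities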
prob_s_with_ls.append((count + 1)/(total_cnts_features + total_features))
return dict(zip(wordList,prob_s_with_ls))
t = show_new_dict(new_word_list,freq_0,total_cnts_features_0)
t.values()
np.prod(list(t.values()))
def compare_values(a,b):
if a > b :
return 0
elif b > a :
return 1
t = train[train['target'] == 0]
t.shape
train.shape[0]
def predict_class(wordlist):
t_0 = show_new_dict(wordlist,freq_0,total_cnts_features_0)
t_1 = show_new_dict(wordlist,freq_1,total_cnts_features_1)
t_0_val = np.prod(list(t_0.values()))
t_1_val = np.prod(list(t_1.values()))
t_0_shape = (train[train['target'] == 0]).shape[0]
t_1_shape = (train[train['target'] == 1]).shape[0]
t_all_shape = train.shape[0]
prob_0 = t_0_val * (t_0_shape / t_all_shape)
prob_1 = t_1_val * (t_1_shape / t_all_shape)
return compare_values(prob_0,prob_1)
sample = val['tweet'].iloc[0]
print(predict_class(word_tokenize(sample)))
###Output
0
###Markdown
To validate the system
###Code
def predict_to_pd(file):
temp_list = []
for x in file['tweet']:
result = predict_class(word_tokenize(x))
temp_list.append(result)
return temp_list
##predict_to_pd(val)
target_val = [row['target'] for index,row in val.iterrows()]
##print(target_val)
###Output
_____no_output_____
###Markdown
Checking the score
###Code
def accuracy_score_naive_bayes(predictList,actualList):
i = 0
for x in range(len(actualList)):
if predictList[x] == actualList[x]:
i = i + 1
else:
i
score = i / len(actualList)
return 100*score
print(accuracy_score_naive_bayes(predict_to_pd(val),target_val))
def precision_score_0_naive_bayes(predictList,actualList):
tp = 0
fp = 0
for x in range(len(actualList)):
if (actualList[x] == 0) and (actualList[x] == predictList[x]):
tp = tp + 1
elif (actualList[x] == 1) and (predictList[x] == 0):
fp = fp + 1
score = tp / (tp+fp)
return 100*score
def precision_score_1_naive_bayes(predictList,actualList):
tp = 0
fp = 0
for x in range(len(actualList)):
if (actualList[x] == 1) and (actualList[x] == predictList[x]):
tp = tp + 1
elif (actualList[x] == 0) and (predictList[x] == 1):
fp = fp + 1
score = tp / (tp+fp)
return 100*score
def recall_score_0_naive_bayes(predictList,actualList):
tp = 0
fn = 0
for x in range(len(actualList)):
if (actualList[x] == 0) and (actualList[x] == predictList[x]):
tp = tp + 1
elif (actualList[x] == 0) and (predictList[x] == 1):
fn = fn + 1
score = tp / (tp+fn)
return 100*score
def recall_score_1_naive_bayes(predictList,actualList):
tp = 0
fn = 0
for x in range(len(actualList)):
if (actualList[x] == 1) and (actualList[x] == predictList[x]):
tp = tp + 1
elif (actualList[x] == 1) and (predictList[x] == 0):
fn = fn + 1
score = tp / (tp+fn)
return 100*score
def f_measure_naive_bayes(precision,recall):
result = 2 * (precision * recall) / (precision + recall)
return result
print("precision score of 0 :",precision_score_0_naive_bayes(predict_to_pd(val),target_val))
print("precision socre of 1 :",precision_score_1_naive_bayes(predict_to_pd(val),target_val))
print("recall score of 0 :",recall_score_0_naive_bayes(predict_to_pd(val),target_val))
print("recall socre of 1 :",recall_score_1_naive_bayes(predict_to_pd(val),target_val))
print("f measure 0 score:",f_measure_naive_bayes(precision_score_0_naive_bayes(predict_to_pd(val),target_val),recall_score_0_naive_bayes(predict_to_pd(val),target_val)))
print("f measure 1 score:",f_measure_naive_bayes(precision_score_0_naive_bayes(predict_to_pd(val),target_val),recall_score_1_naive_bayes(predict_to_pd(val),target_val)))
###Output
precision score of 0 : 81.23249299719888
precision socre of 1 : 40.64516129032258
recall score of 0 : 75.91623036649214
recall socre of 1 : 48.46153846153846
f measure 0 score: 78.48443843031122
f measure 1 score: 60.706750178598135
###Markdown
All of the above is a hand-written implementation that can serve as a reference when building your own method; however, it has not been validated, so its correctness is not guaranteed. Naive Bayes and SVM classifications
###Code
from nltk import pos_tag
from sklearn.preprocessing import LabelEncoder
from collections import defaultdict
from nltk.corpus import wordnet as wn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn import model_selection, naive_bayes, svm
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
Encoder = LabelEncoder()
Tfidf_vect = TfidfVectorizer(max_features=5000)
Tfidf_vect.fit(train['tweet'])
print(Tfidf_vect.vocabulary_)
Train_Y = Encoder.fit_transform(train['target'])
Test_Y = Encoder.fit_transform(val['target'])
Train_X_Tfidf = Tfidf_vect.transform(train['tweet'])
Test_X_Tfidf = Tfidf_vect.transform(val['tweet'])
print(Train_X_Tfidf)
# fit the training dataset on the NB classifier
Naive = naive_bayes.MultinomialNB()
Naive.fit(Train_X_Tfidf,Train_Y)
# predict the labels on validation dataset
predictions_NB = Naive.predict(Test_X_Tfidf)
# Use accuracy_score function to get the accuracy
print("Naive Bayes Accuracy Score -> ",accuracy_score(predictions_NB, Test_Y)*100)
print("Naive Bayes f1 measure Score -> ",f1_score(predictions_NB, Test_Y)*100)
print("Naive Bayes precision Score -> ",precision_score(predictions_NB, Test_Y)*100)
print("Naive Bayes recall Score -> ",recall_score(predictions_NB, Test_Y)*100)
# Classifier - Algorithm - SVM
# fit the training dataset on the classifier
SVM = svm.SVC(C=1.0, kernel='linear', degree=3, gamma='auto')
SVM.fit(Train_X_Tfidf,Train_Y)
# predict the labels on validation dataset
predictions_SVM = SVM.predict(Test_X_Tfidf)
# Use accuracy_score function to get the accuracy
print("SVM Accuracy Score -> ",accuracy_score(predictions_SVM, Test_Y)*100)
print("SVM f1 measure Score -> ",f1_score(predictions_SVM, Test_Y)*100)
print("SVM precision Score -> ",precision_score(predictions_SVM, Test_Y)*100)
print("SVM recall Score -> ",recall_score(predictions_SVM, Test_Y)*100)
fig = plt.figure(figsize=(10,5))
fig.subplots_adjust(hspace=0.4, wspace=0.4)
class_label = ['Naive Bayes', 'SVM']
plt.suptitle('Categorical Comparison Plot')
ax = fig.add_subplot(1,4,1)
plt.bar(class_label, [accuracy_score(predictions_NB, Test_Y)*100,accuracy_score(predictions_SVM, Test_Y)*100], color=['blue', 'orange'])
ax.set_ylim([0,100])
ax.set(xlabel='Models', ylabel="scores", title="accuracy")
ay = fig.add_subplot(1,4,2)
plt.bar(class_label, [precision_score(predictions_NB, Test_Y)*100,precision_score(predictions_SVM, Test_Y)*100], color=['blue', 'orange'])
ay.set_ylim([0,100])
ay.set(xlabel='Models', ylabel="scores", title="precision")
az = fig.add_subplot(1,4,3)
plt.bar(class_label, [recall_score(predictions_NB, Test_Y)*100,recall_score(predictions_SVM, Test_Y)*100], color=['blue', 'orange'])
az.set_ylim([0,100])
az.set(xlabel='Models', ylabel="scores", title="recall")
bx = fig.add_subplot(1,4,4)
plt.bar(class_label, [f1_score(predictions_NB, Test_Y)*100,f1_score(predictions_SVM, Test_Y)*100], color=['blue', 'orange'])
bx.set_ylim([0,100])
bx.set(xlabel='Models', ylabel="scores", title="F measures")
###Output
_____no_output_____
###Markdown
Test set and train set with naive and svm classifier
###Code
Test_X_Tfidf_f = Tfidf_vect.transform(test['tweet'])
Test_Y_f = Encoder.fit_transform(test['target'])
Naive.fit(Train_X_Tfidf,Train_Y)
# predict the labels on validation dataset
predictions_NB = Naive.predict(Test_X_Tfidf_f)
SVM.fit(Train_X_Tfidf,Train_Y)
# predict the labels on validation dataset
predictions_SVM = SVM.predict(Test_X_Tfidf_f)
fig = plt.figure(figsize=(10,5))
fig.subplots_adjust(hspace=0.4, wspace=0.4)
class_label = ['Naive Bayes', 'SVM']
plt.suptitle('Categorical Comparison Plot')
ax = fig.add_subplot(1,4,1)
plt.bar(class_label, [accuracy_score(predictions_NB, Test_Y_f)*100,accuracy_score(predictions_SVM, Test_Y_f)*100], color=['blue', 'orange'])
ax.set_ylim([0,100])
ax.set(xlabel='Models', ylabel="scores", title="accuracy")
ay = fig.add_subplot(1,4,2)
plt.bar(class_label, [precision_score(predictions_NB, Test_Y_f)*100,precision_score(predictions_SVM, Test_Y_f)*100], color=['blue', 'orange'])
ay.set_ylim([0,100])
ay.set(xlabel='Models', ylabel="scores", title="precision")
az = fig.add_subplot(1,4,3)
plt.bar(class_label, [recall_score(predictions_NB, Test_Y_f)*100,recall_score(predictions_SVM, Test_Y_f)*100], color=['blue', 'orange'])
az.set_ylim([0,100])
az.set(xlabel='Models', ylabel="scores", title="recall")
bx = fig.add_subplot(1,4,4)
plt.bar(class_label, [f1_score(predictions_NB, Test_Y_f)*100,f1_score(predictions_SVM, Test_Y_f)*100], color=['blue', 'orange'])
bx.set_ylim([0,100])
bx.set(xlabel='Models', ylabel="scores", title="F measures")
print("Naive Bayes Accuracy Score -> ",accuracy_score(predictions_NB, Test_Y_f)*100)
print("Naive Bayes f1 measure Score -> ",f1_score(predictions_NB, Test_Y_f)*100)
print("Naive Bayes precision Score -> ",precision_score(predictions_NB, Test_Y_f)*100)
print("Naive Bayes recall Score -> ",recall_score(predictions_NB, Test_Y_f)*100)
print("SVM Accuracy Score -> ",accuracy_score(predictions_SVM, Test_Y_f)*100)
print("SVM f1 measure Score -> ",f1_score(predictions_SVM, Test_Y_f)*100)
print("SVM precision Score -> ",precision_score(predictions_SVM, Test_Y_f)*100)
print("SVM recall Score -> ",recall_score(predictions_SVM, Test_Y_f)*100)
###Output
SVM Accuracy Score -> 77.8125
SVM f1 measure Score -> 48.17518248175182
SVM precision Score -> 42.30769230769231
SVM recall Score -> 55.932203389830505
|
ImageClassifierProject-Copy1.ipynb | ###Markdown
Developing an AI applicationGoing forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smart phone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications. In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice you'd train this classifier, then export it for use in your application. We'll be using [this dataset](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html) of 102 flower categories, you can see a few examples below. The project is broken down into multiple steps:* Load and preprocess the image dataset* Train the image classifier on your dataset* Use the trained classifier to predict image contentWe'll lead you through each part which you'll implement in Python.When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will be learning about flowers and end up as a command line application. But, what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car, it tells you what the make and model is, then looks up information about it. Go build your own dataset and make something new.First up is importing the packages you'll need. It's good practice to keep all the imports at the beginning of your code. As you work through this notebook and find you need to import a package, make sure to add the import up here.
###Code
# Imports here
import torch
import matplotlib
import matplotlib.pyplot as plt
import torch.nn.functional as F
import numpy as np
import time
from torch import nn
from torch import optim
from torchvision import datasets, transforms, models
from PIL import Image
###Output
_____no_output_____
###Markdown
Load the dataHere you'll use `torchvision` to load the data ([documentation](http://pytorch.org/docs/0.3.0/torchvision/index.html)). The data should be included alongside this notebook, otherwise you can [download it here](https://s3.amazonaws.com/content.udacity-data.com/nd089/flower_data.tar.gz). The dataset is split into three parts, training, validation, and testing. For the training, you'll want to apply transformations such as random scaling, cropping, and flipping. This will help the network generalize leading to better performance. You'll also need to make sure the input data is resized to 224x224 pixels as required by the pre-trained networks.The validation and testing sets are used to measure the model's performance on data it hasn't seen yet. For this you don't want any scaling or rotation transformations, but you'll need to resize then crop the images to the appropriate size.The pre-trained networks you'll use were trained on the ImageNet dataset where each color channel was normalized separately. For all three sets you'll need to normalize the means and standard deviations of the images to what the network expects. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`, calculated from the ImageNet images. These values will shift each color channel to be centered at 0 and range from -1 to 1.
###Code
data_dir = 'flowers'
train_dir = data_dir + '/train'
valid_dir = data_dir + '/valid'
test_dir = data_dir + '/test'
# TODO: Define your transforms for the training, validation, and testing sets
train_transforms = transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
test_transforms = transforms.Compose([transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
validation_transforms = transforms.Compose([transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
# TODO: Load the datasets with ImageFolder
train_data = datasets.ImageFolder(train_dir, transform=train_transforms)
test_data = datasets.ImageFolder(test_dir, transform=test_transforms)
validation_data = datasets.ImageFolder(valid_dir, transform=validation_transforms)
# TODO: Using the image datasets and the trainforms, define the dataloaders
trainloader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)
testloader = torch.utils.data.DataLoader(test_data, batch_size=64)
validloader = torch.utils.data.DataLoader(validation_data, batch_size=64)
###Output
_____no_output_____
###Markdown
Label mappingYou'll also need to load in a mapping from category label to category name. You can find this in the file `cat_to_name.json`. It's a JSON object which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This will give you a dictionary mapping the integer encoded categories to the actual names of the flowers.
###Code
import json
with open('cat_to_name.json', 'r') as f:
cat_to_name = json.load(f)
###Output
_____no_output_____
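###Markdown
A quick sanity check of the mapping (a minimal sketch; the keys are category labels stored as strings, since they come from JSON):
###Code
print(len(cat_to_name))
print(list(cat_to_name.items())[:3])
###Output
_____no_output_____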
###Markdown
Building and training the classifierNow that the data is ready, it's time to build and train the classifier. As usual, you should use one of the pretrained models from `torchvision.models` to get the image features. Build and train a new feed-forward classifier using those features.We're going to leave this part up to you. Refer to [the rubric](https://review.udacity.com/!/rubrics/1663/view) for guidance on successfully completing this section. Things you'll need to do:* Load a [pre-trained network](http://pytorch.org/docs/master/torchvision/models.html) (If you need a starting point, the VGG networks work great and are straightforward to use)* Define a new, untrained feed-forward network as a classifier, using ReLU activations and dropout* Train the classifier layers using backpropagation using the pre-trained network to get the features* Track the loss and accuracy on the validation set to determine the best hyperparametersWe've left a cell open for you below, but use as many as you need. Our advice is to break the problem up into smaller parts you can run separately. Check that each part is doing what you expect, then move on to the next. You'll likely find that as you work through each part, you'll need to go back and modify your previous code. This is totally normal!When training make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right. Make sure to try different hyperparameters (learning rate, units in the classifier, epochs, etc) to find the best model. Save those hyperparameters to use as default values in the next part of the project.One last important tip if you're using the workspace to run your code: To avoid having your workspace disconnect during the long-running tasks in this notebook, please read in the earlier page in this lesson called Intro toGPU Workspaces about Keeping Your Session Active. You'll want to include code from the workspace_utils.py module.**Note for Workspace users:** If your network is over 1 GB when saved as a checkpoint, there might be issues with saving backups in your workspace. Typically this happens with wide dense layers after the convolutional layers. If your saved checkpoint is larger than 1 GB (you can open a terminal and check with `ls -lh`), you should reduce the size of your hidden layers and train again.
###Code
# Load a pre-trained network
model = models.vgg16(pretrained=True)
print(model)
# Train the classifier layers using backpropagation using the pre-trained network to get the features
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Freeze parameters so we don't backprop through them
for param in model.parameters():
param.requires_grad = False
model.classifier = nn.Sequential((nn.Dropout(0.5)),
(nn.Linear(25088, 120)),
(nn.ReLU()),
(nn.Linear(120, 90)),
(nn.ReLU()),
(nn.Linear(90,80)),
(nn.ReLU()),
(nn.Linear(80,102)),
(nn.LogSoftmax(dim=1)))
criterion = nn.NLLLoss()
# Only train the classifier parameters, feature parameters are frozen
optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)
model.to(device);
#Track the loss and accuracy on the validation set to determine the best hyperparameters
epochs = 9
steps = 0
running_loss = 0
print_every = 5
for epoch in range(epochs):
for inputs, labels in trainloader:
steps += 1
# Move input and label tensors to the default device
inputs, labels = inputs.to(device), labels.to(device)
optimizer.zero_grad()
logps = model.forward(inputs)
loss = criterion(logps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
if steps % print_every == 0:
test_loss = 0
accuracy = 0
model.eval()
with torch.no_grad():
for inputs, labels in validloader:
inputs, labels = inputs.to(device), labels.to(device)
logps = model.forward(inputs)
batch_loss = criterion(logps, labels)
test_loss += batch_loss.item()
# Calculate accuracy
ps = torch.exp(logps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
print(f"Epoch {epoch+1}/{epochs}.. "
f"Train loss: {running_loss/print_every:.3f}.. "
f"Validation loss: {test_loss/len(testloader):.3f}.. "
f"Accuracy: {accuracy/len(testloader):.3f}")
running_loss = 0
model.train()
###Output
Epoch 1/9.. Train loss: 4.611.. Validation loss: 4.562.. Accuracy: 0.022
Epoch 1/9.. Train loss: 4.514.. Validation loss: 4.458.. Accuracy: 0.058
Epoch 1/9.. Train loss: 4.419.. Validation loss: 4.341.. Accuracy: 0.076
Epoch 1/9.. Train loss: 4.310.. Validation loss: 4.196.. Accuracy: 0.120
Epoch 1/9.. Train loss: 4.275.. Validation loss: 4.043.. Accuracy: 0.158
Epoch 1/9.. Train loss: 4.077.. Validation loss: 3.896.. Accuracy: 0.157
Epoch 1/9.. Train loss: 3.977.. Validation loss: 3.671.. Accuracy: 0.183
Epoch 1/9.. Train loss: 3.901.. Validation loss: 3.481.. Accuracy: 0.257
Epoch 1/9.. Train loss: 3.720.. Validation loss: 3.223.. Accuracy: 0.292
Epoch 1/9.. Train loss: 3.457.. Validation loss: 3.043.. Accuracy: 0.285
Epoch 1/9.. Train loss: 3.011.. Validation loss: 2.804.. Accuracy: 0.316
Epoch 1/9.. Train loss: 3.033.. Validation loss: 2.649.. Accuracy: 0.400
Epoch 1/9.. Train loss: 2.958.. Validation loss: 2.460.. Accuracy: 0.409
Epoch 1/9.. Train loss: 2.834.. Validation loss: 2.428.. Accuracy: 0.389
Epoch 1/9.. Train loss: 2.508.. Validation loss: 2.285.. Accuracy: 0.431
Epoch 1/9.. Train loss: 2.583.. Validation loss: 2.098.. Accuracy: 0.486
Epoch 1/9.. Train loss: 2.612.. Validation loss: 2.162.. Accuracy: 0.444
Epoch 1/9.. Train loss: 2.544.. Validation loss: 1.949.. Accuracy: 0.497
Epoch 1/9.. Train loss: 2.357.. Validation loss: 1.829.. Accuracy: 0.507
Epoch 1/9.. Train loss: 2.353.. Validation loss: 1.870.. Accuracy: 0.492
Epoch 2/9.. Train loss: 2.267.. Validation loss: 1.783.. Accuracy: 0.518
Epoch 2/9.. Train loss: 2.110.. Validation loss: 1.773.. Accuracy: 0.532
Epoch 2/9.. Train loss: 2.034.. Validation loss: 1.714.. Accuracy: 0.550
Epoch 2/9.. Train loss: 2.112.. Validation loss: 1.652.. Accuracy: 0.547
Epoch 2/9.. Train loss: 1.955.. Validation loss: 1.619.. Accuracy: 0.554
Epoch 2/9.. Train loss: 1.749.. Validation loss: 1.472.. Accuracy: 0.592
Epoch 2/9.. Train loss: 2.035.. Validation loss: 1.426.. Accuracy: 0.602
Epoch 2/9.. Train loss: 1.739.. Validation loss: 1.327.. Accuracy: 0.646
Epoch 2/9.. Train loss: 1.892.. Validation loss: 1.387.. Accuracy: 0.625
Epoch 2/9.. Train loss: 1.697.. Validation loss: 1.387.. Accuracy: 0.617
Epoch 2/9.. Train loss: 1.600.. Validation loss: 1.333.. Accuracy: 0.628
Epoch 2/9.. Train loss: 1.802.. Validation loss: 1.301.. Accuracy: 0.645
Epoch 2/9.. Train loss: 1.631.. Validation loss: 1.223.. Accuracy: 0.666
Epoch 2/9.. Train loss: 1.783.. Validation loss: 1.313.. Accuracy: 0.640
Epoch 2/9.. Train loss: 1.762.. Validation loss: 1.270.. Accuracy: 0.642
Epoch 2/9.. Train loss: 1.647.. Validation loss: 1.203.. Accuracy: 0.669
Epoch 2/9.. Train loss: 1.643.. Validation loss: 1.199.. Accuracy: 0.669
Epoch 2/9.. Train loss: 1.541.. Validation loss: 1.146.. Accuracy: 0.665
Epoch 2/9.. Train loss: 1.621.. Validation loss: 1.051.. Accuracy: 0.710
Epoch 2/9.. Train loss: 1.664.. Validation loss: 1.106.. Accuracy: 0.693
Epoch 2/9.. Train loss: 1.672.. Validation loss: 1.077.. Accuracy: 0.706
Epoch 3/9.. Train loss: 1.405.. Validation loss: 1.078.. Accuracy: 0.683
Epoch 3/9.. Train loss: 1.403.. Validation loss: 1.023.. Accuracy: 0.699
Epoch 3/9.. Train loss: 1.549.. Validation loss: 1.096.. Accuracy: 0.694
Epoch 3/9.. Train loss: 1.605.. Validation loss: 1.039.. Accuracy: 0.706
Epoch 3/9.. Train loss: 1.533.. Validation loss: 1.014.. Accuracy: 0.713
Epoch 3/9.. Train loss: 1.281.. Validation loss: 0.981.. Accuracy: 0.722
Epoch 3/9.. Train loss: 1.385.. Validation loss: 1.066.. Accuracy: 0.685
Epoch 3/9.. Train loss: 1.412.. Validation loss: 1.047.. Accuracy: 0.719
Epoch 3/9.. Train loss: 1.267.. Validation loss: 0.945.. Accuracy: 0.736
Epoch 3/9.. Train loss: 1.416.. Validation loss: 0.901.. Accuracy: 0.751
Epoch 3/9.. Train loss: 1.341.. Validation loss: 0.900.. Accuracy: 0.745
Epoch 3/9.. Train loss: 1.348.. Validation loss: 0.989.. Accuracy: 0.720
Epoch 3/9.. Train loss: 1.414.. Validation loss: 0.868.. Accuracy: 0.756
Epoch 3/9.. Train loss: 1.322.. Validation loss: 1.000.. Accuracy: 0.709
Epoch 3/9.. Train loss: 1.375.. Validation loss: 0.992.. Accuracy: 0.732
Epoch 3/9.. Train loss: 1.352.. Validation loss: 0.935.. Accuracy: 0.740
Epoch 3/9.. Train loss: 1.328.. Validation loss: 0.891.. Accuracy: 0.741
Epoch 3/9.. Train loss: 1.294.. Validation loss: 0.880.. Accuracy: 0.754
Epoch 3/9.. Train loss: 1.295.. Validation loss: 0.846.. Accuracy: 0.758
Epoch 3/9.. Train loss: 1.343.. Validation loss: 0.784.. Accuracy: 0.778
Epoch 4/9.. Train loss: 1.167.. Validation loss: 0.847.. Accuracy: 0.759
Epoch 4/9.. Train loss: 1.108.. Validation loss: 0.860.. Accuracy: 0.763
Epoch 4/9.. Train loss: 1.202.. Validation loss: 0.806.. Accuracy: 0.775
Epoch 4/9.. Train loss: 1.167.. Validation loss: 0.867.. Accuracy: 0.742
Epoch 4/9.. Train loss: 1.084.. Validation loss: 0.896.. Accuracy: 0.742
Epoch 4/9.. Train loss: 1.220.. Validation loss: 0.817.. Accuracy: 0.770
Epoch 4/9.. Train loss: 1.204.. Validation loss: 0.778.. Accuracy: 0.792
Epoch 4/9.. Train loss: 1.122.. Validation loss: 0.768.. Accuracy: 0.782
Epoch 4/9.. Train loss: 1.174.. Validation loss: 0.777.. Accuracy: 0.776
Epoch 4/9.. Train loss: 1.105.. Validation loss: 0.735.. Accuracy: 0.786
Epoch 4/9.. Train loss: 1.256.. Validation loss: 0.807.. Accuracy: 0.767
Epoch 4/9.. Train loss: 1.125.. Validation loss: 0.802.. Accuracy: 0.769
Epoch 4/9.. Train loss: 1.178.. Validation loss: 0.742.. Accuracy: 0.779
Epoch 4/9.. Train loss: 1.022.. Validation loss: 0.762.. Accuracy: 0.791
Epoch 4/9.. Train loss: 1.081.. Validation loss: 0.740.. Accuracy: 0.783
Epoch 4/9.. Train loss: 1.102.. Validation loss: 0.722.. Accuracy: 0.796
Epoch 4/9.. Train loss: 1.358.. Validation loss: 0.709.. Accuracy: 0.794
Epoch 4/9.. Train loss: 1.113.. Validation loss: 0.765.. Accuracy: 0.772
Epoch 4/9.. Train loss: 1.258.. Validation loss: 0.732.. Accuracy: 0.789
Epoch 4/9.. Train loss: 1.312.. Validation loss: 0.690.. Accuracy: 0.800
Epoch 4/9.. Train loss: 1.076.. Validation loss: 0.698.. Accuracy: 0.786
Epoch 5/9.. Train loss: 1.137.. Validation loss: 0.739.. Accuracy: 0.787
Epoch 5/9.. Train loss: 1.110.. Validation loss: 0.699.. Accuracy: 0.800
Epoch 5/9.. Train loss: 1.081.. Validation loss: 0.713.. Accuracy: 0.794
Epoch 5/9.. Train loss: 1.097.. Validation loss: 0.701.. Accuracy: 0.798
Epoch 5/9.. Train loss: 1.152.. Validation loss: 0.723.. Accuracy: 0.798
Epoch 5/9.. Train loss: 1.109.. Validation loss: 0.690.. Accuracy: 0.807
Epoch 5/9.. Train loss: 1.164.. Validation loss: 0.704.. Accuracy: 0.783
Epoch 5/9.. Train loss: 0.962.. Validation loss: 0.673.. Accuracy: 0.804
Epoch 5/9.. Train loss: 1.148.. Validation loss: 0.687.. Accuracy: 0.798
Epoch 5/9.. Train loss: 1.017.. Validation loss: 0.676.. Accuracy: 0.801
Epoch 5/9.. Train loss: 0.984.. Validation loss: 0.643.. Accuracy: 0.820
Epoch 5/9.. Train loss: 0.953.. Validation loss: 0.679.. Accuracy: 0.801
Epoch 5/9.. Train loss: 1.039.. Validation loss: 0.698.. Accuracy: 0.796
Epoch 5/9.. Train loss: 1.046.. Validation loss: 0.640.. Accuracy: 0.802
Epoch 5/9.. Train loss: 1.150.. Validation loss: 0.674.. Accuracy: 0.815
Epoch 5/9.. Train loss: 1.105.. Validation loss: 0.625.. Accuracy: 0.811
Epoch 5/9.. Train loss: 1.098.. Validation loss: 0.617.. Accuracy: 0.820
Epoch 5/9.. Train loss: 1.136.. Validation loss: 0.648.. Accuracy: 0.822
Epoch 5/9.. Train loss: 0.871.. Validation loss: 0.591.. Accuracy: 0.836
Epoch 5/9.. Train loss: 1.009.. Validation loss: 0.654.. Accuracy: 0.815
Epoch 5/9.. Train loss: 1.018.. Validation loss: 0.619.. Accuracy: 0.818
Epoch 6/9.. Train loss: 0.760.. Validation loss: 0.680.. Accuracy: 0.803
Epoch 6/9.. Train loss: 0.953.. Validation loss: 0.649.. Accuracy: 0.814
Epoch 6/9.. Train loss: 0.985.. Validation loss: 0.621.. Accuracy: 0.824
Epoch 6/9.. Train loss: 1.086.. Validation loss: 0.585.. Accuracy: 0.827
Epoch 6/9.. Train loss: 1.068.. Validation loss: 0.587.. Accuracy: 0.835
Epoch 6/9.. Train loss: 0.865.. Validation loss: 0.664.. Accuracy: 0.803
Epoch 6/9.. Train loss: 0.895.. Validation loss: 0.658.. Accuracy: 0.803
Epoch 6/9.. Train loss: 0.984.. Validation loss: 0.635.. Accuracy: 0.817
Epoch 6/9.. Train loss: 0.870.. Validation loss: 0.605.. Accuracy: 0.815
###Markdown
Testing your networkIt's good practice to test your trained network on test data, images the network has never seen either in training or validation. This will give you a good estimate for the model's performance on completely new images. Run the test images through the network and measure the accuracy, the same way you did validation. You should be able to reach around 70% accuracy on the test set if the model has been trained well.
###Code
# TODO: Do validation on the test set
def check_test_acc(testloader):
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to(device), labels.to(device)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy on test images %d %%' % (100 * correct / total))
check_test_acc(testloader)
###Output
Accuracy on test images 80 %
###Markdown
Save the checkpointNow that your network is trained, save the model so you can load it later for making predictions. You probably want to save other things such as the mapping of classes to indices which you get from one of the image datasets: `image_datasets['train'].class_to_idx`. You can attach this to the model as an attribute which makes inference easier later on.```model.class_to_idx = image_datasets['train'].class_to_idx```Remember that you'll want to completely rebuild the model later so you can use it for inference. Make sure to include any information you need in the checkpoint. If you want to load the model and keep training, you'll want to save the number of epochs as well as the optimizer state, `optimizer.state_dict`. You'll likely want to use this trained model in the next part of the project, so best to save it now.
###Code
# TODO: Save the checkpoint
model.class_to_idx = train_data.class_to_idx
torch.save({
'epochs': epochs,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'class_to_idx':model.class_to_idx
}, 'checkpoint.pt')
###Output
_____no_output_____
###Markdown
Loading the checkpointAt this point it's good to write a function that can load a checkpoint and rebuild the model. That way you can come back to this project and keep working on it without having to retrain the network.
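For a loader that also works in a fresh session, the architecture has to be rebuilt before the weights are restored. Below is a sketch under the assumption that the checkpoint came from a frozen torchvision VGG16 with the custom classifier shown in the printout further down (`rebuild_from_checkpoint` is an illustrative name, not part of the project):

```python
# Sketch only: rebuild the network from scratch, then restore the saved weights.
# Assumes the same VGG16 backbone and classifier sizes used earlier in this notebook.
import torch
from torch import nn
from torchvision import models

def rebuild_from_checkpoint(path):
    checkpoint = torch.load(path, map_location='cpu')
    model = models.vgg16(pretrained=True)      # downloads weights on first use
    for param in model.parameters():
        param.requires_grad = False
    model.classifier = nn.Sequential(
        nn.Dropout(p=0.5),
        nn.Linear(25088, 120), nn.ReLU(),
        nn.Linear(120, 90), nn.ReLU(),
        nn.Linear(90, 80), nn.ReLU(),
        nn.Linear(80, 102), nn.LogSoftmax(dim=1))
    model.load_state_dict(checkpoint['model_state_dict'])
    model.class_to_idx = checkpoint['class_to_idx']
    return model
```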
###Code
# TODO: Write a function that loads a checkpoint and rebuilds the model
def load_model(path):
    # Load the checkpoint and restore the weights/class mapping into the
    # globally defined `model` built earlier in the notebook
    checkpoint = torch.load(path)
    model.class_to_idx = checkpoint['class_to_idx']
    model.load_state_dict(checkpoint['model_state_dict'])
    epochs = checkpoint['epochs']  # saved epoch count, useful if training resumes
load_model('checkpoint.pt')
print(model)
###Output
VGG(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace)
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU(inplace)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU(inplace)
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU(inplace)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU(inplace)
(16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): ReLU(inplace)
(19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU(inplace)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU(inplace)
(23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(25): ReLU(inplace)
(26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(27): ReLU(inplace)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU(inplace)
(30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
(classifier): Sequential(
(0): Dropout(p=0.5)
(1): Linear(in_features=25088, out_features=120, bias=True)
(2): ReLU()
(3): Linear(in_features=120, out_features=90, bias=True)
(4): ReLU()
(5): Linear(in_features=90, out_features=80, bias=True)
(6): ReLU()
(7): Linear(in_features=80, out_features=102, bias=True)
(8): LogSoftmax()
)
)
###Markdown
Inference for classificationNow you'll write a function to use a trained network for inference. That is, you'll pass an image into the network and predict the class of the flower in the image. Write a function called `predict` that takes an image and a model, then returns the top $K$ most likely classes along with the probabilities. It should look like ```pythonprobs, classes = predict(image_path, model)print(probs)print(classes)> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]> ['70', '3', '45', '62', '55']```First you'll need to handle processing the input image such that it can be used in your network. Image PreprocessingYou'll want to use `PIL` to load the image ([documentation](https://pillow.readthedocs.io/en/latest/reference/Image.html)). It's best to write a function that preprocesses the image so it can be used as input for the model. This function should process the images in the same manner used for training. First, resize the images where the shortest side is 256 pixels, keeping the aspect ratio. This can be done with the [`thumbnail`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.htmlPIL.Image.Image.thumbnail) or [`resize`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.htmlPIL.Image.Image.thumbnail) methods. Then you'll need to crop out the center 224x224 portion of the image.Color channels of images are typically encoded as integers 0-255, but the model expected floats 0-1. You'll need to convert the values. It's easiest with a Numpy array, which you can get from a PIL image like so `np_image = np.array(pil_image)`.As before, the network expects the images to be normalized in a specific way. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`. You'll want to subtract the means from each color channel, then divide by the standard deviation. And finally, PyTorch expects the color channel to be the first dimension but it's the third dimension in the PIL image and Numpy array. You can reorder dimensions using [`ndarray.transpose`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.transpose.html). The color channel needs to be first and retain the order of the other two dimensions.
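The cell below takes a shortcut through `torchvision.transforms`; for reference, here is a sketch of the manual PIL + NumPy route described above (`process_image_manual` is an illustrative name and assumes RGB input images):

```python
# Sketch of the manual preprocessing route (resize, center-crop, scale, normalize,
# reorder channels). The next cell does the same job with torchvision transforms.
import numpy as np
from PIL import Image

def process_image_manual(image_path):
    pil_image = Image.open(image_path)
    # Resize so the shortest side is 256 pixels, keeping the aspect ratio
    w, h = pil_image.size
    if w < h:
        pil_image = pil_image.resize((256, int(256 * h / w)))
    else:
        pil_image = pil_image.resize((int(256 * w / h), 256))
    # Crop out the center 224x224 portion
    w, h = pil_image.size
    left, top = (w - 224) // 2, (h - 224) // 2
    pil_image = pil_image.crop((left, top, left + 224, top + 224))
    # Scale to 0-1, normalize with the ImageNet means/stds, put the channel first
    np_image = np.array(pil_image) / 255.0
    np_image = (np_image - np.array([0.485, 0.456, 0.406])) / np.array([0.229, 0.224, 0.225])
    return np_image.transpose((2, 0, 1))
```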
###Code
def process_image(image):
image_pil = Image.open(image)
adjustments = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
image_tensor = adjustments(image_pil)
return image_tensor
# TODO: Process a PIL image for use in a PyTorch model
img = (data_dir + '/test' + '/1/' + 'image_06752.jpg')
img = process_image(img)
print(img.shape)
###Output
torch.Size([3, 224, 224])
###Markdown
To check your work, the function below converts a PyTorch tensor and displays it in the notebook. If your `process_image` function works, running the output through this function should return the original image (except for the cropped out portions).
###Code
def imshow(image, ax=None, title=None):
"""Imshow for Tensor."""
if ax is None:
fig, ax = plt.subplots()
# PyTorch tensors assume the color channel is the first dimension
    # but matplotlib assumes it is the third dimension
image = image.numpy().transpose((1, 2, 0))
# Undo preprocessing
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
image = std * image + mean
# Image needs to be clipped between 0 and 1 or it looks like noise when displayed
image = np.clip(image, 0, 1)
ax.imshow(image)
return ax
imshow(process_image("flowers/test/1/image_06743.jpg"))
###Output
_____no_output_____
###Markdown
Class PredictionOnce you can get images in the correct format, it's time to write a function for making predictions with your model. A common practice is to predict the top 5 or so (usually called top-$K$) most probable classes. You'll want to calculate the class probabilities then find the $K$ largest values.To get the top $K$ largest values in a tensor use [`x.topk(k)`](http://pytorch.org/docs/master/torch.htmltorch.topk). This method returns both the highest `k` probabilities and the indices of those probabilities corresponding to the classes. You need to convert from these indices to the actual class labels using `class_to_idx` which hopefully you added to the model or from an `ImageFolder` you used to load the data ([see here](Save-the-checkpoint)). Make sure to invert the dictionary so you get a mapping from index to class as well.Again, this method should take a path to an image and a model checkpoint, then return the probabilities and classes.```pythonprobs, classes = predict(image_path, model)print(probs)print(classes)> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]> ['70', '3', '45', '62', '55']```
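The `predict` implemented below returns the raw `topk` probabilities and tensor indices; mapping those indices back to class labels uses the inverted `class_to_idx`, roughly like this (`topk_to_classes` is an illustrative helper and assumes `class_to_idx` was attached to the model when the checkpoint was saved):

```python
# Sketch: convert the (probabilities, indices) pair returned by topk into
# (probabilities, class labels) using the inverted class_to_idx mapping.
def topk_to_classes(probs_and_indices, model, topk=5):
    probs, indices = probs_and_indices
    idx_to_class = {v: k for k, v in model.class_to_idx.items()}
    classes = [idx_to_class[int(i)] for i in indices.cpu().numpy().flatten()[:topk]]
    return probs.cpu().numpy().flatten()[:topk], classes
```

Usage would then look like `probs, classes = topk_to_classes(predict(image_path, model), model)` with the `predict` from the next cell.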
###Code
def predict(image_path, model, topk=5):
# TODO: Implement the code to predict the class from an image file
img_torch = process_image(image_path)
img_torch = img_torch.unsqueeze_(0)
img_torch = img_torch.float()
with torch.no_grad():
output = model.forward(img_torch.cuda())
probability = F.softmax(output.data,dim=1)
return probability.topk(topk)
img = (data_dir + '/test' + '/1/' + 'image_06743.jpg')
val1, val2 = predict(img, model)
print(val1)
print(val2)
###Output
tensor([[0.8794, 0.0772, 0.0278, 0.0038, 0.0034]], device='cuda:0')
tensor([[ 0, 49, 13, 100, 87]], device='cuda:0')
###Markdown
Sanity CheckingNow that you can use a trained model for predictions, check to make sure it makes sense. Even if the testing accuracy is high, it's always good to check that there aren't obvious bugs. Use `matplotlib` to plot the probabilities for the top 5 classes as a bar graph, along with the input image. It should look like this:You can convert from the class integer encoding to actual flower names with the `cat_to_name.json` file (should have been loaded earlier in the notebook). To show a PyTorch tensor as an image, use the `imshow` function defined above.
###Code
# TODO: Display an image along with the top 5 classes
def check_sanity(path):
plt.rcParams["figure.figsize"] = (10,5)
plt.subplot(211)
index = 1
probabilities = predict(path, model)
image = process_image(path)
probabilities = probabilities
axs = imshow(image, ax = plt)
axs.axis('off')
axs.title(cat_to_name[str(index)])
axs.show()
a = np.array((probabilities[0][0]).cpu())
b = [cat_to_name[str(index + 1)] for index in np.array((probabilities[1][0]).cpu())]
N=float(len(b))
fig,ax = plt.subplots(figsize=(8,3))
width = 0.8
tickLocations = np.arange(N)
ax.bar(tickLocations, a, width, linewidth=4.0, align = 'center')
ax.set_xticks(ticks = tickLocations)
ax.set_xticklabels(b)
ax.set_xlim(min(tickLocations)-0.6,max(tickLocations)+0.6)
ax.set_yticks([0.2,0.4,0.6,0.8,1,1.2])
ax.set_ylim((0,1))
ax.yaxis.grid(True)
plt.show()
check_sanity(test_dir + '/1/image_06743.jpg')
###Output
_____no_output_____ |
notebooks/main-unsupervised-sanity-checks.ipynb | ###Markdown
Loading avdsr model with default init
###Code
def avdsr_feature(**kwargs):
kwargs['tag'] = 'Training avDSR based on DQN agents'
generate_tag(kwargs)
kwargs.setdefault('log_level', 0)
config = Config()
config.merge(kwargs)
config.task_fn = lambda: Task(config.game)
config.eval_env = config.task_fn()
config.c = 1
config.optimizer_fn = lambda params: torch.optim.RMSprop(params, 0.002)
config.network_fn = lambda: SRNet(config.action_dim, SRIdentityBody(config.state_dim), hidden_units=(), config=0) #CHECK
# config.network_fn = lambda: SRNetCNN(config.action_dim, SRIdentityBody(config.state_dim),
# hidden_units=(2000,), config=0)
config.replay_fn = lambda: Replay(memory_size=int(4e5), batch_size=10)
config.random_action_prob = LinearSchedule(1, 1, 1e4) # CHECK
config.discount = 0.99
config.target_network_update_freq = 200
config.exploration_steps = 0
# config.double_q = True
config.double_q = False
config.sgd_update_frequency = 4
config.gradient_clip = 5
config.max_steps = 1e1
config.async_actor = False
agent = avDSRAgent(config, config.agents, style='DQN')
return agent
#run_steps function below
config = agent.config
agent_name = agent.__class__.__name__
t0 = time.time()
while True:
if config.log_interval and not agent.total_steps % config.log_interval:
agent.logger.info('steps %d, %.2f steps/s' % (agent.total_steps, config.log_interval / (time.time() - t0)))
t0 = time.time()
if config.max_steps and agent.total_steps >= config.max_steps:
        break  # stop once max_steps is reached (module-level snippet, so break instead of return)
agent.step()
agent.switch_task()
avdsr = avdsr_feature(game='FourRoomsMatrixNoTerm', agents=[], choice=0)
###Output
_____no_output_____
###Markdown
Updating weights of avdsr from saved files
###Code
iters = 300000
weights = torch.load('../storage/01-avdsr.weights', map_location='cpu').state_dict()
# weights = torch.load('../storage/20-'+str(iters)+'-avdsr.weights', map_location='cpu').state_dict()
avdsr.network.load_state_dict(weights,strict=True)
###Output
_____no_output_____
###Markdown
Visualizing the learnt SRs
###Code
from deep_rl.component.fourrooms import *
# import matplotlib
# matplotlib.rc('axes',edgecolor='black')
g = [21, 28, 84, 91]
env = FourRoomsMatrix(layout='4rooms')
state = env.reset(init=g[3])
plt.imshow(env.render(show_goal=False))
plt.axis('off')
plt.savefig('../storage/fig5.4-f.png')
plt.show()
_, out, _ = avdsr.network(tensor(state).unsqueeze(0))
dirs = {0: 'up', 1: 'down', 2:'left', 3:'right'}
plt.figure(dpi=100)
psi = out.detach().cpu().numpy()
for i in range(4):
psi_a = psi[0,i,:]
plt.subplot(2,2,i+1)
plt.imshow(psi_a.reshape((13,13)))
plt.title(dirs[i])
plt.axis('off')
# plt.suptitle('Fine-tuning: '+ str(iters) + ' iterations')
plt.savefig('../storage/fig5.4-j.png')
plt.show()
###Output
_____no_output_____
###Markdown
Plotting the PCA for all values in env
###Code
from deep_rl.component.fourrooms import *
g = [21, 28, 84, 91]
c = np.ones(104)*4
room1 = list(range(5)) + list(range(10,15)) + list(range(20,25)) + list(range(31,36)) +list(range(41,46))
room2 = list(range(5,10)) + list(range(15,20)) + list(range(26,31)) + list(range(36,41)) + list(range(46,51)) + list(range(52,57))
room3 = list(range(57,62)) + list(range(63,68)) + list(range(73,78)) + list(range(83,88)) + list(range(94,99))
connect = [25, 51, 62, 88]
c[room1] = 1
c[room2] = 2
c[room3] = 3
c[connect] = [-1, -1, -1, -1]
env = FourRoomsMatrix(layout='4rooms')
psi_all = np.zeros((104,169*4))
for i in range(104):
state = env.reset(init=i)
_, out, _ = avdsr.network(tensor(state).unsqueeze(0))
psi = out.detach().cpu().numpy()
psi_all[i,:] = psi.flatten()
psi_all.shape
from sklearn.decomposition import PCA
plt.figure(figsize=(6,6),dpi=200)
pca = PCA(n_components=2)
k = pca.fit_transform(psi_all)
plt.scatter(k[:,0],k[:,1], c=c)
plt.xlabel('first principal component', fontsize=14)
plt.ylabel('second principal component', fontsize=14)
# plt.colorbar()
plt.title('Principal components of SFs using PCA', fontsize=14)
plt.savefig('../storage/fig5.4-b.png')
###Output
_____no_output_____
Model_Training/Code.ipynb | ###Markdown
Importing Libraries
###Code
# Basic File handling
import pandas as pd
import numpy as np
# Import Data from URL
import io
import requests
# Data Visualization
import seaborn as sns
import matplotlib.pyplot as plt
# Pre-Processing
from sklearn import preprocessing
label_encoder = preprocessing.LabelEncoder()
# Splitting data for Model Training
from sklearn.model_selection import train_test_split
# Training Model
from sklearn.ensemble import RandomForestClassifier
# Random Forest Classifier was chosen because it was most Accurate
# Model Evaluation : Accuracy Score
from sklearn.metrics import accuracy_score
# Export Trained Model as *.pkl
import joblib
###Output
_____no_output_____
###Markdown
Data Pre-Processing---
###Code
# Getting the Data Set in the Program
url = 'https://raw.githubusercontent.com/iSiddharth20/Predictive-Analysis-for-Machine-Faliure/master/dataset.csv'
s = requests.get(url).content
data = pd.read_csv(io.StringIO(s.decode('utf-8')))
###Output
_____no_output_____
###Markdown
--- Identifying and Removing Null Values from : Pressure , Moisture , Temperature , Broken---
###Code
# Identifing Null Values from "Pressure"
data[data['pressureInd'].isnull()]
# Identifing Null Values from "Moisture"
data[data['moistureInd'].isnull()]
# Identifing Null Values from "Temperature"
data[data['temperatureInd'].isnull()]
# Identifing Null Values from "Broken"
data[data['broken'].isnull()]
# Removing all the Null Values and Resetting Index of the Data Frame
data.dropna(inplace=True)
data.reset_index(inplace=True)
data.drop(columns=['index'],axis=1,inplace=True)
# First 5 Rows of the Data
data.head()
# Check for Null Values
data.isnull().sum()
###Output
_____no_output_____
###Markdown
**** Observations :Filling the Null Values with Mean reduced the Accuracy. Hence, those entries were Dropped.*** --- Identifying and Removing Outliers---
###Code
# Identify Outliers from column 'Pressure'
sns.boxplot(x = data['pressureInd'])
# Identify Outliers from column 'Temperature'
sns.boxplot(x = data['temperatureInd'])
# Identify Outliers from column 'Moisture'
sns.boxplot(x = data['moistureInd'])
'''
Removing Outliers using InterQuartile Range
Q1 : First Quartile
Q3 : Third Quartile
IQR : Inter Quartile Range
Only data points within the Inter Quartile Range will be stored
'''
# Finding Key Values
Q1 = data.quantile(0.25)
Q3 = data.quantile(0.75)
IQR = Q3 - Q1
# Selecting Valid Data Points
data = data[~((data < (Q1 - 1.5 * IQR)) |(data > (Q3 + 1.5 * IQR))).any(axis=1)]
# Resetting Index of Data Frame
data.reset_index(inplace=True)
data.drop(columns=['index'],axis=1,inplace=True)
###Output
_____no_output_____
###Markdown
--- Checking for Remaining Outliers---
###Code
# From column 'Pressure'
sns.boxplot(x = data['pressureInd'])
# From column 'Temperature'
sns.boxplot(x = data['temperatureInd'])
# From column 'Moisture'
sns.boxplot(x = data['moistureInd'])
###Output
_____no_output_____
###Markdown
***Observations : Removing the Outliers subsequently increased Accuracy *** Data Visualization--- --- Visualizing Distribution of Data Key Factors : Pressure , Moisture , Temperature---
###Code
# Distribution Plot of "Pressure"
sns.distplot(data['pressureInd'])
# Distribution Plot of "Moisture"
sns.distplot(data['moistureInd'])
# Distribution Plot of "Temperature"
sns.distplot(data['temperatureInd'])
###Output
_____no_output_____
###Markdown
**** Observation :The Data appears to be Normally Distributed which means predictions will be precise*** --- Observing "Yes" vs "No" ratio in Target variable : "Broken"---
###Code
sns.countplot(data=data,x=data['broken'])
###Output
_____no_output_____
###Markdown
***Observation : There appear to be more 'No' values than 'Yes' Values*** --- Analyzing Dependency of Factors over our Target Variable : State of Machine : "Broken" Using Scatter Plot---
###Code
# Pressure and Lifetime
plt.figure(figsize=(10,4))
sns.scatterplot(data=data,x=data['lifetime'],y=data['pressureInd'],hue=data['broken'])
# Moisture and Lifetime
plt.figure(figsize=(10,4))
sns.scatterplot(data=data,x=data['lifetime'],y=data['moistureInd'],hue=data['broken'])
# Temperature and Lifetime
plt.figure(figsize=(10,4))
sns.scatterplot(data=data,x=data['lifetime'],y=data['temperatureInd'],hue=data['broken'])
###Output
_____no_output_____
###Markdown
Machine Learning Model---
###Code
# Encoding Values to Unique Integers to aid Mathematical Calculations
data['broken'] = label_encoder.fit_transform(data['broken'])
data['broken'].unique()
'''
X : Features (Independent Variables)
y : Target Variable
'''
X = data.drop('broken',axis = 1)
Y = data['broken']
# Splitting the Data for Training and Testing in 70:30 Ratio
X_train, X_test, Y_train, Y_test = train_test_split(X,Y,test_size = 0.3,random_state = 2)
###Output
_____no_output_____
###Markdown
**** Observation : The 70:30 ratio proved optimal; an 80:20 split overfitted the model*** --- Fitting and Training the Model---
###Code
model = RandomForestClassifier(max_depth=9, random_state=0)
model.fit(X_train,Y_train)
###Output
_____no_output_____
###Markdown
**** Observation : "Random Forest" was chosen over other techniques because it gave maximum Accuracy* Observation : Maximum Gepth pver 10 overfitted the Model.***
###Code
# Testing the Model
Y_pred = model.predict(X_test)
###Output
_____no_output_____
###Markdown
--- Checking Accuracy ---
###Code
acc = round(accuracy_score(Y_test,Y_pred)*100,3)
print('Accuracy : ',acc,' %')
###Output
Accuracy : 95.238 %
###Markdown
**** Observation : Accuracy is always more than 92% , averaging at 95%*** Exporting the Trained Model ---
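Once the cell below has written `trained_model.pkl`, the model can be restored elsewhere with `joblib.load`; a minimal sketch, reusing `X_test` from earlier so the feature columns are guaranteed to match:

```python
# Minimal sketch: reload the exported model and reuse it for predictions.
# Assumes the dump cell below has been run and X_test is still in memory.
import joblib

loaded_model = joblib.load("trained_model.pkl")
reloaded_preds = loaded_model.predict(X_test)   # same features as during training
print(reloaded_preds[:10])                      # labels follow the LabelEncoder mapping used above
```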
###Code
# Exporting Trained model as 'trained_model.pkl'
joblib.dump(model,"trained_model.pkl")
###Output
_____no_output_____ |
00_PyTorchBasics/01_SaveLoadModel.ipynb | ###Markdown
SAVING AND LOADING MODELS This document provides solutions to a variety of use cases regarding the saving and loading of PyTorch models. Feel free to read the whole document, or just skip to the code you need for a desired use case. When it comes to saving and loading models, there are three core functions to be familiar with:- `torch.save`: Saves a serialized object to disk. This function uses Python’s `pickle` utility for serialization. Models, tensors, and dictionaries of all kinds of objects can be saved using this function.- `torch.load`: Uses `pickle`’s unpickling facilities to deserialize pickled object files to memory. This function also facilitates the device to load the data into (see Saving & Loading Model Across Devices).- `torch.nn.Module.load_state_dict`: Loads a model’s parameter dictionary using a deserialized state_dict. For more information on state_dict, see What is a state_dict?. Contents:- What is a state_dict?- Saving & Loading Model for Inference- Saving & Loading a General Checkpoint- Saving Multiple Models in One File- Warmstarting Model Using Parameters from a Different Model- Saving & Loading Model Across Devices What is a state_dict? In PyTorch, the learnable parameters (i.e. weights and biases) of an `torch.nn.Module` model are contained in the model’s parameters (accessed with `model.parameters()`). A `state_dict` is simply a Python dictionary object that maps each layer to its parameter tensor. Note that only layers with learnable parameters (convolutional layers, linear layers, etc.) and registered buffers (batchnorm’s running_mean) have entries in the model’s state_dict. Optimizer objects (torch.optim) also have a state_dict, which contains information about the optimizer’s state, as well as the hyperparameters used.Because `state_dict` objects are Python dictionaries, they can be easily saved, updated, altered, and restored, adding a great deal of modularity to PyTorch models and optimizers. Example: Let’s take a look at the `state_dict` from the simple model used in the Training a classifier tutorial.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F  # used by F.relu in the forward pass below
import torch.optim as optim
# Define model
class ModelClass(nn.Module):
def __init__(self):
super(ModelClass, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
# Initialize model
model = ModelClass()
# Initialize optimizer
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
# Print model's state_dict
print("Model's state_dict:")
for param_tensor in model.state_dict():
print(param_tensor, "\t", model.state_dict()[param_tensor].size())
# Print optimizer's state_dict
print("Optimizer's state_dict:")
for var_name in optimizer.state_dict():
print(var_name, "\t", optimizer.state_dict()[var_name])
model.state_dict()
###Output
_____no_output_____
###Markdown
Saving & Loading Model for InferenceSave/Load state_dict (Recommended) Save:`torch.save(model.state_dict(), PATH)` Load:``model = ModelClass(*args, **kwargs)````model.load_state_dict(torch.load(PATH))```model.eval()`
###Code
PATH = "../../../../MEGA/DatabaseLocal/myNet.pt"
torch.save(model.state_dict(), PATH)
###Output
_____no_output_____
###Markdown
**When saving a model for inference**, it is only necessary to save the trained model’s learned parameters. Saving the model’s state_dict with the torch.save() function will give you the most flexibility for restoring the model later, which is why it is the recommended method for saving models.A common PyTorch convention is to save models using either a .pt or .pth file extension.Remember that you must call model.eval() to set dropout and batch normalization layers to evaluation mode before running inference. Failing to do this will yield inconsistent inference results. NOTENotice that the load_state_dict() function takes a dictionary object, NOT a path to a saved object. This means that you must deserialize the saved state_dict before you pass it to the load_state_dict() function. For example, you CANNOT load using model.load_state_dict(PATH).
###Code
model = ModelClass()
model.load_state_dict(torch.load(PATH))
model.eval()
###Output
_____no_output_____
###Markdown
Save/Load Entire Model Save: `torch.save(model, PATH)`
###Code
torch.save(model, PATH)
###Output
_____no_output_____
###Markdown
Load: Model class must be defined somewhere. `model = torch.load(PATH)`; `model.eval()`
###Code
model = torch.load(PATH)
model.eval()
###Output
_____no_output_____
###Markdown
This save/load process uses the most intuitive syntax and involves the least amount of code. Saving a model in this way will save the entire module using Python’s pickle module. **The disadvantage of this approach is that the serialized data is bound to the specific classes and the exact directory structure used when the model is saved**. The reason for this is because pickle does not save the model class itself. Rather, it saves a path to the file containing the class, which is used during load time. Because of this, your code can break in various ways when used in other projects or after refactors.A common PyTorch convention is to save models using either a .pt or .pth file extension.Remember that you must call model.eval() to set dropout and batch normalization layers to evaluation mode before running inference. Failing to do this will yield inconsistent inference results. Saving & Loading a General Checkpoint for Inference and/or Resuming Training Save:
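A concrete, runnable variant of the pattern shown in the cell below, reusing the `model` and `optimizer` defined earlier in this notebook (`epoch`, `loss`, and the file name are made-up placeholders):

```python
# Runnable sketch of the general-checkpoint pattern; values are placeholders.
epoch, loss = 0, 0.0
torch.save({
    'epoch': epoch,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
    'loss': loss,
}, 'general_checkpoint.pt')
```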
###Code
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': loss,
...
}, PATH)
###Output
_____no_output_____
###Markdown
Load:
###Code
model = ModelClass()
optimizer = TheOptimizerClass(*args, **kwargs)
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']
loss = checkpoint['loss']
model.eval()
# - or -
model.train()
###Output
_____no_output_____ |
_source/raw/rec_algo_ncf_mxnet_d2ai.ipynb | ###Markdown
The following additional libraries are needed to run thisnotebook. Note that running on Colab is experimental, please report a Githubissue if you have any problem.
###Code
!pip install -U mxnet-cu101==1.7.0
!pip install d2l==0.16.2
###Output
_____no_output_____
###Markdown
Neural Collaborative Filtering for Personalized RankingThis section moves beyond explicit feedback, introducing the neural collaborative filtering (NCF) framework for recommendation with implicit feedback. Implicit feedback is pervasive in recommender systems. Actions such as Clicks, buys, and watches are common implicit feedback which are easy to collect and indicative of users' preferences. The model we will introduce, titled NeuMF :cite:`He.Liao.Zhang.ea.2017`, short for neural matrix factorization, aims to address the personalized ranking task with implicit feedback. This model leverages the flexibility and non-linearity of neural networks to replace dot products of matrix factorization, aiming at enhancing the model expressiveness. In specific, this model is structured with two subnetworks including generalized matrix factorization (GMF) and MLP and models the interactions from two pathways instead of simple inner products. The outputs of these two networks are concatenated for the final prediction scores calculation. Unlike the rating prediction task in AutoRec, this model generates a ranked recommendation list to each user based on the implicit feedback. We will use the personalized ranking loss introduced in the last section to train this model. The NeuMF modelAs aforementioned, NeuMF fuses two subnetworks. The GMF is a generic neural network version of matrix factorization where the input is the elementwise product of user and item latent factors. It consists of two neural layers:$$\mathbf{x} = \mathbf{p}_u \odot \mathbf{q}_i \\\hat{y}_{ui} = \alpha(\mathbf{h}^\top \mathbf{x}),$$where $\odot$ denotes the Hadamard product of vectors. $\mathbf{P} \in \mathbb{R}^{m \times k}$ and $\mathbf{Q} \in \mathbb{R}^{n \times k}$ corespond to user and item latent matrix respectively. $\mathbf{p}_u \in \mathbb{R}^{ k}$ is the $u^\mathrm{th}$ row of $P$ and $\mathbf{q}_i \in \mathbb{R}^{ k}$ is the $i^\mathrm{th}$ row of $Q$. $\alpha$ and $h$ denote the activation function and weight of the output layer. $\hat{y}_{ui}$ is the prediction score of the user $u$ might give to the item $i$.Another component of this model is MLP. To enrich model flexibility, the MLP subnetwork does not share user and item embeddings with GMF. It uses the concatenation of user and item embeddings as input. With the complicated connections and nonlinear transformations, it is capable of estimating the intricate interactions between users and items. More precisely, the MLP subnetwork is defined as:$$\begin{aligned}z^{(1)} &= \phi_1(\mathbf{U}_u, \mathbf{V}_i) = \left[ \mathbf{U}_u, \mathbf{V}_i \right] \\\phi^{(2)}(z^{(1)}) &= \alpha^1(\mathbf{W}^{(2)} z^{(1)} + b^{(2)}) \\&... \\\phi^{(L)}(z^{(L-1)}) &= \alpha^L(\mathbf{W}^{(L)} z^{(L-1)} + b^{(L)})) \\\hat{y}_{ui} &= \alpha(\mathbf{h}^\top\phi^L(z^{(L-1)}))\end{aligned}$$where $\mathbf{W}^*, \mathbf{b}^*$ and $\alpha^*$ denote the weight matrix, bias vector, and activation function. $\phi^*$ denotes the function of the corresponding layer. $\mathbf{z}^*$ denotes the output of corresponding layer.To fuse the results of GMF and MLP, instead of simple addition, NeuMF concatenates the second last layers of two subnetworks to create a feature vector which can be passed to the further layers. Afterwards, the ouputs are projected with matrix $\mathbf{h}$ and a sigmoid activation function. 
The prediction layer is formulated as:$$\hat{y}_{ui} = \sigma(\mathbf{h}^\top[\mathbf{x}, \phi^L(z^{(L-1)})]).$$The following figure illustrates the model architecture of NeuMF.![Illustration of the NeuMF model](https://github.com/d2l-ai/d2l-en-colab/blob/master/img/rec-neumf.svg?raw=1)
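As a quick numeric illustration of this prediction layer (plain NumPy rather than MXNet; sizes and values are arbitrary):

```python
# Toy illustration: the GMF output and the last MLP layer are concatenated and
# projected through h, then squashed with a sigmoid.
import numpy as np

k = 4                                             # latent dimension
p_u, q_i = np.random.rand(k), np.random.rand(k)   # GMF user/item factors
gmf = p_u * q_i                                   # Hadamard product
mlp_out = np.random.rand(k)                       # stand-in for phi^L(z^(L-1))
h = np.random.rand(2 * k)                         # projection weights
score = 1 / (1 + np.exp(-h @ np.concatenate([gmf, mlp_out])))
print(score)                                      # predicted preference, in (0, 1)
```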
###Code
import random
import mxnet as mx
from mxnet import autograd, gluon, np, npx
from mxnet.gluon import nn
from d2l import mxnet as d2l
npx.set_np()
###Output
_____no_output_____
###Markdown
Model ImplementationThe following code implements the NeuMF model. It consists of a generalized matrix factorization model and a multi-layered perceptron with different user and item embedding vectors. The structure of the MLP is controlled with the parameter `nums_hiddens`. ReLU is used as the default activation function.
###Code
class NeuMF(nn.Block):
def __init__(self, num_factors, num_users, num_items, nums_hiddens,
**kwargs):
super(NeuMF, self).__init__(**kwargs)
self.P = nn.Embedding(num_users, num_factors)
self.Q = nn.Embedding(num_items, num_factors)
self.U = nn.Embedding(num_users, num_factors)
self.V = nn.Embedding(num_items, num_factors)
self.mlp = nn.Sequential()
for num_hiddens in nums_hiddens:
self.mlp.add(
nn.Dense(num_hiddens, activation='relu', use_bias=True))
self.prediction_layer = nn.Dense(1, activation='sigmoid',
use_bias=False)
def forward(self, user_id, item_id):
p_mf = self.P(user_id)
q_mf = self.Q(item_id)
gmf = p_mf * q_mf
p_mlp = self.U(user_id)
q_mlp = self.V(item_id)
mlp = self.mlp(np.concatenate([p_mlp, q_mlp], axis=1))
con_res = np.concatenate([gmf, mlp], axis=1)
return self.prediction_layer(con_res)
###Output
_____no_output_____
###Markdown
Customized Dataset with Negative SamplingFor pairwise ranking loss, an important step is negative sampling. For each user, the items that a user has not interacted with are candidate items (unobserved entries). The following function takes users identity and candidate items as input, and samples negative items randomly for each user from the candidate set of that user. During the training stage, the model ensures that the items that a user likes to be ranked higher than items he dislikes or has not interacted with.
###Code
class PRDataset(gluon.data.Dataset):
def __init__(self, users, items, candidates, num_items):
self.users = users
self.items = items
self.cand = candidates
self.all = set([i for i in range(num_items)])
def __len__(self):
return len(self.users)
    def __getitem__(self, idx):
        # `self.cand[u]` holds the items user u has interacted with; a negative
        # item is sampled uniformly from the complement of that set
        neg_items = list(self.all - set(self.cand[int(self.users[idx])]))
indices = random.randint(0, len(neg_items) - 1)
return self.users[idx], self.items[idx], neg_items[indices]
###Output
_____no_output_____
###Markdown
EvaluatorIn this section, we adopt the splitting by time strategy to construct the training and test sets. Two evaluation measures including hit rate at given cutting off $\ell$ ($\text{Hit}@\ell$) and area under the ROC curve (AUC) are used to assess the model effectiveness. Hit rate at given position $\ell$ for each user indicates that whether the recommended item is included in the top $\ell$ ranked list. The formal definition is as follows:$$\text{Hit}@\ell = \frac{1}{m} \sum_{u \in \mathcal{U}} \textbf{1}(rank_{u, g_u} <= \ell),$$where $\textbf{1}$ denotes an indicator function that is equal to one if the ground truth item is ranked in the top $\ell$ list, otherwise it is equal to zero. $rank_{u, g_u}$ denotes the ranking of the ground truth item $g_u$ of the user $u$ in the recommendation list (The ideal ranking is 1). $m$ is the number of users. $\mathcal{U}$ is the user set.The definition of AUC is as follows:$$\text{AUC} = \frac{1}{m} \sum_{u \in \mathcal{U}} \frac{1}{|\mathcal{I} \backslash S_u|} \sum_{j \in I \backslash S_u} \textbf{1}(rank_{u, g_u} < rank_{u, j}),$$where $\mathcal{I}$ is the item set. $S_u$ is the candidate items of user $u$. Note that many other evaluation protocols such as precision, recall and normalized discounted cumulative gain (NDCG) can also be used.The following function calculates the hit counts and AUC for each user.
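To make the two measures concrete, here is a toy hand computation for a single user (made-up ranking; the `hit_and_auc` helper in the next cell automates the same arithmetic):

```python
# Toy check of Hit@k and AUC for one user; the ranked list and ground truth are made up.
ranked = [12, 7, 3, 25, 40, 8]   # items sorted by predicted score
ground_truth = [25]              # the held-out test item
k = 3
rank = ranked.index(ground_truth[0])                # 0-based rank = 3 (4th place)
hit_at_k = 1 if rank < k else 0                     # 0 here: not in the top 3
auc = (len(ranked) - 1 - rank) / (len(ranked) - 1)  # (5 - 3) / 5 = 0.4
print(hit_at_k, auc)
```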
###Code
#@save
def hit_and_auc(rankedlist, test_matrix, k):
hits_k = [(idx, val) for idx, val in enumerate(rankedlist[:k])
if val in set(test_matrix)]
hits_all = [(idx, val) for idx, val in enumerate(rankedlist)
if val in set(test_matrix)]
max = len(rankedlist) - 1
auc = 1.0 * (max - hits_all[0][0]) / max if len(hits_all) > 0 else 0
return len(hits_k), auc
###Output
_____no_output_____
###Markdown
Then, the overall Hit rate and AUC are calculated as follows.
###Code
#@save
def evaluate_ranking(net, test_input, seq, candidates, num_users, num_items,
devices):
ranked_list, ranked_items, hit_rate, auc = {}, {}, [], []
all_items = set([i for i in range(num_users)])
for u in range(num_users):
neg_items = list(all_items - set(candidates[int(u)]))
user_ids, item_ids, x, scores = [], [], [], []
[item_ids.append(i) for i in neg_items]
[user_ids.append(u) for _ in neg_items]
x.extend([np.array(user_ids)])
if seq is not None:
x.append(seq[user_ids, :])
x.extend([np.array(item_ids)])
test_data_iter = gluon.data.DataLoader(gluon.data.ArrayDataset(*x),
shuffle=False,
last_batch="keep",
batch_size=1024)
for index, values in enumerate(test_data_iter):
x = [
gluon.utils.split_and_load(v, devices, even_split=False)
for v in values]
scores.extend([list(net(*t).asnumpy()) for t in zip(*x)])
scores = [item for sublist in scores for item in sublist]
item_scores = list(zip(item_ids, scores))
ranked_list[u] = sorted(item_scores, key=lambda t: t[1], reverse=True)
ranked_items[u] = [r[0] for r in ranked_list[u]]
temp = hit_and_auc(ranked_items[u], test_input[u], 50)
hit_rate.append(temp[0])
auc.append(temp[1])
return np.mean(np.array(hit_rate)), np.mean(np.array(auc))
###Output
_____no_output_____
###Markdown
Training and Evaluating the ModelThe training function is defined below. We train the model in the pairwise manner.
###Code
#@save
def train_ranking(net, train_iter, test_iter, loss, trainer, test_seq_iter,
num_users, num_items, num_epochs, devices, evaluator,
candidates, eval_step=1):
timer, hit_rate, auc = d2l.Timer(), 0, 0
animator = d2l.Animator(xlabel='epoch', xlim=[1, num_epochs], ylim=[0, 1],
legend=['test hit rate', 'test AUC'])
for epoch in range(num_epochs):
metric, l = d2l.Accumulator(3), 0.
for i, values in enumerate(train_iter):
input_data = []
for v in values:
input_data.append(gluon.utils.split_and_load(v, devices))
with autograd.record():
p_pos = [net(*t) for t in zip(*input_data[0:-1])]
p_neg = [
net(*t) for t in zip(*input_data[0:-2], input_data[-1])]
ls = [loss(p, n) for p, n in zip(p_pos, p_neg)]
[l.backward(retain_graph=False) for l in ls]
l += sum([l.asnumpy() for l in ls]).mean() / len(devices)
trainer.step(values[0].shape[0])
metric.add(l, values[0].shape[0], values[0].size)
timer.stop()
with autograd.predict_mode():
if (epoch + 1) % eval_step == 0:
hit_rate, auc = evaluator(net, test_iter, test_seq_iter,
candidates, num_users, num_items,
devices)
animator.add(epoch + 1, (hit_rate, auc))
print(f'train loss {metric[0] / metric[1]:.3f}, '
f'test hit rate {float(hit_rate):.3f}, test AUC {float(auc):.3f}')
print(f'{metric[2] * num_epochs / timer.sum():.1f} examples/sec '
f'on {str(devices)}')
###Output
_____no_output_____
###Markdown
Now, we can load the MovieLens 100k dataset and train the model. Since there are only ratings in the MovieLens dataset, with some losses of accuracy, we binarize these ratings to zeros and ones. If a user rated an item, we consider the implicit feedback as one, otherwise as zero. The action of rating an item can be treated as a form of providing implicit feedback. Here, we split the dataset in the `seq-aware` mode where users' latest interacted items are left out for test.
###Code
batch_size = 1024
df, num_users, num_items = d2l.read_data_ml100k()
train_data, test_data = d2l.split_data_ml100k(df, num_users, num_items,
'seq-aware')
users_train, items_train, ratings_train, candidates = d2l.load_data_ml100k(
train_data, num_users, num_items, feedback="implicit")
users_test, items_test, ratings_test, test_iter = d2l.load_data_ml100k(
test_data, num_users, num_items, feedback="implicit")
train_iter = gluon.data.DataLoader(
PRDataset(users_train, items_train, candidates, num_items), batch_size,
True, last_batch="rollover", num_workers=d2l.get_dataloader_workers())
###Output
_____no_output_____
###Markdown
We then create and initialize the model. We use a three-layer MLP with constant hidden size 10.
###Code
devices = d2l.try_all_gpus()
net = NeuMF(10, num_users, num_items, nums_hiddens=[10, 10, 10])
net.initialize(ctx=devices, force_reinit=True, init=mx.init.Normal(0.01))
###Output
_____no_output_____
###Markdown
The following code trains the model.
###Code
lr, num_epochs, wd, optimizer = 0.01, 10, 1e-5, 'adam'
loss = d2l.BPRLoss()
trainer = gluon.Trainer(net.collect_params(), optimizer, {
"learning_rate": lr,
'wd': wd})
train_ranking(net, train_iter, test_iter, loss, trainer, None, num_users,
num_items, num_epochs, devices, evaluate_ranking, candidates)
###Output
train loss 16.982, test hit rate 0.075, test AUC 0.531
11.5 examples/sec on [gpu(0), gpu(1)]
|
notes/E-Boolean_Variables_and_If_Then_Else_Statements.ipynb | ###Markdown
Primitive Data Types: BooleansThese are the basic data types that constitute all of the more complex data structures in python. The basic data types are the following:* Strings (for text)* Numeric types (integers and decimals)* Booleans BooleansBooleans represent the truth or success of a statement, and are commonly used for branching and checking status in code.They can take two values: `True` or `False`.
###Code
bool_1 = True
bool_2 = False
print(bool_1)
print(bool_2)
###Output
_____no_output_____
###Markdown
If you remember from our strings session, we could execute a command that checks in a string appears within another. For example:
###Code
lookfor = "Trump"
text = """Three American prisoners freed from North Korea arrived here early
Thursday to a personal welcome from President Trump, who traveled to an air
base in the middle of the night to meet them."""
trump_in_text = lookfor in text
print("Does Trump appear in the text?", trump_in_text)
###Output
_____no_output_____
###Markdown
Boolean Operations:Frequently, one wants to combine or modify boolean values. Python has several operations for just this purpose:+ `not a`: returns the opposite value of `a`.+ `a and b`: returns true if and only if both `a` and `b` are true.+ `a or b`: returns true either `a` or `b` are true, or both.See LPTHW [Exercise 27](http://learnpythonthehardway.org/book/ex27.html) Like mathematical expressions, boolean expressions can be nested using parentheses.
###Code
var1 = 5
var2 = 6
var3 = 7
###Output
_____no_output_____
###Markdown
Consider the outcomes of the following examples
###Code
print (var1 + var2 == 11)
print (var2 + var3 == 13)
print (var1 + var2 == 11 and var2 + var3 == 13)
print (var1 + var2 == 12 and var2 + var3 == 13)
print (var1 + var2 == 12 or var2 + var3 == 13)
print ( (not var1 + var2 == 12) or ( var2 + var3 == 14) )
###Output
_____no_output_____
###Markdown
ExerciseComplete Exercises 1-12 in [28](http://learnpythonthehardway.org/book/ex28.html) at LPTHW. You can find them also below. Try to find the outcome before executing the cell.
###Code
#1
True and True
#2
False and True
#3
1 == 1 and 2 == 1
#4
"test" == "test"
#5
1 == 1 or 2 != 1
#6
True and 1 == 1
#7
False and 0 != 0
#8
True or 1 == 1
#9
"test" == "testing"
#10
1 != 0 and 2 == 1
#11
"test" != "testing"
#12
"test" == 1
###Output
_____no_output_____
###Markdown
Now Complete Exercises 12-20 in [28](http://learnpythonthehardway.org/book/ex28.html). But this time let's examine how to evaluate these expressions on a step by step basis.
###Code
#13
not (True and False)
#14
not (1 == 1 and 0 != 1)
#15
not (10 == 1 or 1000 == 1000)
#16
not (1 != 10 or 3 == 4)
#17
not ("testing" == "testing" and "Zed" == "Cool Guy")
#18
1 == 1 and (not ("testing" == 1 or 1 == 0))
#19
"chunky" == "bacon" and (not (3 == 4 or 3 == 3))
#20
3 == 3 and (not ("testing" == "testing" or "Python" == "Fun"))
#bonus
3 != 4 and not ("testing" != "test" or "Python" == "Python")
###Output
_____no_output_____
###Markdown
ExerciseNow let's try to write the boolean expressions that will evaluate different conditions, given a set of other variables.
###Code
age = 18
# You need to be above 21 yo
can_drink_alcohol = False # your code here, replace "False" with an expression
print(f"Age: {age}; can drink alcohol? {can_drink_alcohol}" )
age = 18
# You need to be above 16 yo
can_get_driving_license = False # your code here, replace "False" with an expression
print(f"Age: {age}; can get driving license? {can_get_driving_license}" )
us_citizen = True
# You need to be a US Citizen
can_get_us_passport = False # your code here, replace "False" with an expression
print(f"US Citizen: {us_citizen}; can get US passport? {can_get_us_passport}" )
# You need to be above 18 and a US Citizen
age = 18
us_citizen = True
can_vote = False # your code here, replace "False" with an expression
print(f"US Citizen: {us_citizen}; Age: {age}\nCan Vote? {can_vote}" )
# You need to be above 35, a US Citizen, and born in the US
age = 70
born_in_us = True
us_citizen = True
can_be_president = False # your code here, replace "False" with an expression
print("US Citizen: {us_citizen}; Age: {age}; Born in US? {born_in_us}\nCan be president? {can_be_president}" )
# Can you become citizen?
# You qualify for a citizen if any of the following holds
# * Your parents are US Citizens and you are under 18
# * You have been born in the US
age = 19
parents_citizens = False
born_in_us = True
citizen_eligible = False # your code here, replace "False" with an expression
print("Citizen parents: {parents_citizens}; Age: {age}; Born in US? {born_in_us}\nEligible for Citizen? {citizen_eligible}" )
###Output
_____no_output_____
###Markdown
Control Structures: if statementsTraversing over data and making decisions based upon data are a common aspect of every programming language, known as control flow. Python provides a rich control flow, with a lot of conveniences for the power users. Here, we're just going to talk about the basics, to learn more, please [consult the documentation](http://docs.python.org/2/tutorial/controlflow.html). A common theme throughout this discussion of control structures is the notion of a "block of code." Blocks of code are **demarcated by a specific level of indentation**, typically separated from the surrounding code by some control structure elements, immediately preceeded by a colon, `:`. We'll see examples below. Finally, note that control structures can be nested arbitrarily, depending on the tasks you're trying to accomplish. if Statements:**See also LPTHW, Exp 29, 30, and 31.**If statements are perhaps the most widely used of all control structures. An if statement consists of a code block and an argument. The if statement evaluates the boolean value of it's argument, executing the code block if that argument is true.
###Code
execute = False
if execute:
print("Of course!")
print("This will execute as well")
execute = False
if execute:
print("Me? Nobody?")
print("Really? Nobody?")
print("I am not nested, I will show up!")
###Output
_____no_output_____
###Markdown
And here is an `if` statement paired with an `else`.
###Code
lookfor = "Trump"
text = "Three American prisoners freed from North Korea arrived here early Thursday to a personal welcome from President Trump, who traveled to an air base in the middle of the night to meet them."
if lookfor in text:
print(lookfor, "appears in the text")
else:
print(lookfor, "does not appear in the text")
lookfor = "Obama"
text = "Three American prisoners freed from North Korea arrived here early Thursday to a personal welcome from President Trump, who traveled to an air base in the middle of the night to meet them."
if lookfor in text:
print(lookfor, "appears in the text")
else:
print(lookfor, "does not appear in the text")
###Output
_____no_output_____
###Markdown
Each argument in the above if statements is a boolean expression. Often you want to have alternatives, blocks of code that get evaluated in the event that the argument to an if statement is false. This is where **`elif`** (else if) and else come in. An **`elif`** is evaluated if all preceding if or elif arguments have evaluated to false. The else statement is the last resort, assigning the code that gets executed if no if or elif above it is true. These statements are optional, and can be added to an if statement in any order, with at most one code block being evaluated. An else will always have it's code be executed, if nothing above it is true.
###Code
status = 'Senior'
if status == 'Freshman':
print("Hello newbie!")
print("How's college treating you?")
elif status == 'Sophomore':
print("Welcome back!")
elif status == 'Junior':
print("Almost there, almost there")
elif status == 'Senior':
print("You can drink now! You will need it.")
elif status == 'Senior':
print("The secret of life is 42. But you will never see this")
else:
print("Are you a graduate student? Or (gasp!) faculty?")
###Output
_____no_output_____
###Markdown
Exercise * You need to be 21 years old and above to drink alcohol. Write a conditional expression that checks the age, and prints out whether the person is allowed to drink alcohol.
###Code
age = 20
if age >= 21:
print("You are above 21, you can drink")
else:
print("You are too young. Wait for", 21-age, "years")
age = 18
if age>=21:
print("You are above 21, you can drink")
else:
print("You are too young. Wait for", 21-age, "years")
###Output
_____no_output_____
###Markdown
* You need to be 16 years old and above to drive. If you are above 16, you also need to have a driving license. Write a conditional expression that checks the age and prints out: (a) whether the person is too young to drive, (b) whether the person satisfies the age criteria but needs a driving license, or (c) the person can drive.
###Code
age = 18
has_driving_license = False
if age<16:
print("You are too young to drive")
else:
if has_driving_license:
print("You can drive")
else:
print("You are old enough to drive, but you need a driving license")
age = 15
has_driving_license = True
if age >= 16 and has_driving_license:
print("You can drive")
elif age >= 16 and not has_driving_license:
print("You are old enough to drive, but you need a driving license")
else:
print("You are too young to drive")
age = 18
has_driving_license = False
if age<16:
print("You are too young to drive")
else:
if has_driving_license:
print("You can drive")
else:
print("You are old enough to drive, but you need a driving license")
age = 18
has_driving_license = False
if age>=16 and has_driving_license:
print("You can drive")
elif age>=16 and not has_driving_license:
print("You are old enough to drive, but you need a driving license")
else:
print("You are too young to drive")
###Output
_____no_output_____
###Markdown
* You need to be above 18 and a US Citizen, to be able to vote. You also need to be registered. Write the conditional expression that checks for these conditions and prints out whether the person can vote. If the person cannot vote, the code should print out the reason (below 18, or not a US citizen, or not registered, or a combination thereof).
###Code
age = 15
us_citizen = False
registered = True
if age >= 18 and us_citizen and registered:
print("You can vote")
else:
print("You cannot vote")
# Now let's explain the reason
if age < 18:
print("You are below 18")
if not us_citizen:
print("You are not a US citizen")
if not registered:
print("You are not registered")
age = 15
us_citizen = False
registered = True
if age >= 18 and us_citizen and registered:
print("You can vote")
else:
print("You cannot vote")
if age<18:
print("You are below 18")
if not us_citizen:
print("You are not a US citizen")
if not registered:
print("You are not registered")
###Output
_____no_output_____
###Markdown
* You qualify for US citizen if any of the following holds: (a) Your parents are US Citizens and you are under 18, (b) You have been born in the US. Write the conditional expression that checks if a person is eligible to become a US citizen. If the person is not eligible, the code should print out the reason.
###Code
age = 15
parents_citizens = False
born_in_us = False
if (age<18 and parents_citizens) or born_in_us:
print("You can become a US citizen")
else: # none of the two conditions around the or were True
print("You cannot become a US citizen")
if not born_in_us:
print("You were not born in the US")
if not (age<18 and parents_citizens):
print("You need to be below 18 and your parents need to be citizens")
age = 16
parents_citizens = True
born_in_us = False
if (age<18 and parents_citizens) or born_in_us:
print("You can become a US citizen")
else: # none of the conditions were true
if not born_in_us:
print("You were not born in the US")
if not (age<18 and parents_citizens):
print("You need to be below 18 and your parents need to be citizens")
###Output
_____no_output_____
###Markdown
Primitive Data Types: BooleansThese are the basic data types that constitute all of the more complex data structures in python. The basic data types are the following:* Strings (for text)* Numeric types (integers and decimals)* Booleans BooleansBooleans represent the truth or success of a statement, and are commonly used for branching and checking status in code.They can take two values: `True` or `False`.
###Code
bool_1 = True
bool_2 = False
print(bool_1)
print(bool_2)
###Output
True
False
###Markdown
If you remember from our strings session, we could execute a command that checks in a string appears within another. For example:
###Code
lookfor = "Trump"
text = """Three American prisoners freed from North Korea arrived here early
Thursday to a personal welcome from President Trump, who traveled to an air
base in the middle of the night to meet them."""
trump_in_text = lookfor in text
print("Does Trump appear in the text?", trump_in_text)
###Output
Does Trump appear in the text? True
###Markdown
Boolean Operations:Frequently, one wants to combine or modify boolean values. Python has several operations for just this purpose:+ `not a`: returns the opposite value of `a`.+ `a and b`: returns true if and only if both `a` and `b` are true.+ `a or b`: returns true either `a` or `b` are true, or both.See LPTHW [Exercise 27](http://learnpythonthehardway.org/book/ex27.html) Like mathematical expressions, boolean expressions can be nested using parentheses.
###Code
var1 = 5
var2 = 6
var3 = 7
###Output
_____no_output_____
###Markdown
Consider the outcomes of the following examples
###Code
print(var1 + var2 == 11)
print(var2 + var3 == 13)
print(var1 + var2 == 11 and var2 + var3 == 13)
print(var1 + var2 == 12 and var2 + var3 == 13)
print(var1 + var2 == 12 or var2 + var3 == 13)
print((not var1 + var2 == 12) or (var2 + var3 == 14))
###Output
True
###Markdown
ExerciseComplete Exercises 1-12 in [28](http://learnpythonthehardway.org/book/ex28.html) at LPTHW. You can find them also below. Try to find the outcome before executing the cell.
###Code
# 1
True and True
# 2
False and True
# 3
1 == 1 and 2 == 1
# 4
"test" == "test"
# 5
1 == 1 or 2 != 1
# 6
True and 1 == 1
# 7
False and 0 != 0
# 8
True or 1 == 1
# 9
"test" == "testing"
# 10
1 != 0 and 2 == 1
# 11
"test" != "testing"
# 12
"test" == 1
###Output
_____no_output_____
###Markdown
Now Complete Exercises 12-20 in [28](http://learnpythonthehardway.org/book/ex28.html). But this time let's examine how to evaluate these expressions on a step by step basis.
###Code
# 13
not (True and False)
# 14
not (1 == 1 and 0 != 1)
# 15
not (10 == 1 or 1000 == 1000)
# 16
not (1 != 10 or 3 == 4)
# 17
not ("testing" == "testing" and "Zed" == "Cool Guy")
# 18
1 == 1 and (not ("testing" == 1 or 1 == 0))
# 19
"chunky" == "bacon" and (not (3 == 4 or 3 == 3))
# 20
3 == 3 and (not ("testing" == "testing" or "Python" == "Fun"))
# bonus
3 != 4 and not ("testing" != "test" or "Python" == "Python")
###Output
_____no_output_____
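###Markdown
To make the step-by-step evaluation concrete, here is one of the exercises above worked out explicitly (a sketch; the intermediate variables exist only for illustration):
###Code
# Exercise 17: not ("testing" == "testing" and "Zed" == "Cool Guy")
step1 = "testing" == "testing"  # True
step2 = "Zed" == "Cool Guy"     # False
step3 = step1 and step2         # True and False -> False
result = not step3              # not False -> True
print(result)
###Output
True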
###Markdown
ExerciseNow let's try to write the boolean expressions that will evaluate different conditions, given a set of other variables.
###Code
# To drink alcohol, you need to be above 21 yo
age = 18
# your code here, replace "False" with an expression
can_drink_alcohol = False
print(f"Age: {age}; can drink alcohol? {can_drink_alcohol}")
# To get a driving license you need to be above 16 yo
age = 18
# your code here, replace "False" with an expression
can_get_driving_license = False
print(f"Age: {age}; can get driving license? {can_get_driving_license}")
# You need to be a US Citizen to have a passport
us_citizen = True
# your code here, replace "False" with an expression
can_get_us_passport = False
print(f"US Citizen: {us_citizen}; can get US passport? {can_get_us_passport}")
# You need to be above 18 and a US Citizen
age = 18
us_citizen = True
# your code here, replace "False" with an expression
can_vote = False
print(f"US Citizen: {us_citizen}; Age: {age}\nCan Vote? {can_vote}")
# You need to be above 35, a US Citizen, and born in the US
age = 70
born_in_us = True
us_citizen = True
# your code here, replace "False" with an expression
can_be_president = False
print(f"US Citizen: {us_citizen}; Age: {age}; Born in US? {born_in_us}")
print(f"Can be president? {can_be_president}")
# Can you become citizen?
# You qualify for a citizen if any of the following holds
# * Your parents are US Citizens and you are under 18
# * You have been born in the US
age = 19
parents_citizens = False
born_in_us = True
# your code here, replace "False" with an expression
citizen_eligible = False
print(f"Citizen parents: {parents_citizens}")
print(f"Age: {age}")
print(f"Born in US? {born_in_us}")
print(f"Eligible for Citizen? {citizen_eligible}")
###Output
Citizen parents: False
Age: 19
Born in US? True
Eligible for Citizen? False
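###Markdown
One possible set of expressions for the exercises above (a sketch, not the only valid answers; the variable names and thresholds follow the comments in the cells above, and "above 18/35" is read as "18/35 or older", matching the solved examples later in this notebook):
###Code
age = 18
us_citizen = True
born_in_us = True
parents_citizens = False
can_drink_alcohol = age >= 21
can_get_driving_license = age >= 16
can_get_us_passport = us_citizen
can_vote = age >= 18 and us_citizen
can_be_president = age >= 35 and us_citizen and born_in_us
citizen_eligible = (age < 18 and parents_citizens) or born_in_us
print(can_drink_alcohol, can_get_driving_license, can_get_us_passport)
print(can_vote, can_be_president, citizen_eligible)
###Output
False True True
True False True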
###Markdown
Control Structures: if statementsTraversing over data and making decisions based upon data are a common aspect of every programming language, known as control flow. Python provides a rich control flow, with a lot of conveniences for the power users. Here, we're just going to talk about the basics; to learn more, please [consult the documentation](http://docs.python.org/2/tutorial/controlflow.html). A common theme throughout this discussion of control structures is the notion of a "block of code." Blocks of code are **demarcated by a specific level of indentation**, typically separated from the surrounding code by some control structure elements, immediately preceded by a colon, `:`. We'll see examples below. Finally, note that control structures can be nested arbitrarily, depending on the tasks you're trying to accomplish. if Statements:**See also LPTHW, Exp 29, 30, and 31.**If statements are perhaps the most widely used of all control structures. An if statement consists of a code block and an argument. The if statement evaluates the boolean value of its argument, executing the code block if that argument is true.
###Code
execute = False
if execute:
print("Of course!")
print("This will execute as well")
execute = False
if execute:
print("Me? Nobody?")
print("Really? Nobody?")
print("I am not nested, I will show up!")
###Output
I am not nested, I will show up!
###Markdown
And here is an `if` statement paired with an `else`.
###Code
lookfor = "Trump"
text = """
Three American prisoners freed from North Korea arrived here early Thursday
to a personal welcome from President Trump, who traveled to an air
base in the middle of the night to meet them.
"""
if lookfor in text:
print(lookfor, "appears in the text")
else:
print(lookfor, "does not appear in the text")
lookfor = "Obama"
text = """
Three American prisoners freed from North Korea arrived
here early Thursday to a personal welcome from President Trump,
who traveled to an air base in the middle of the night to meet them.
"""
if lookfor in text:
print(lookfor, "appears in the text")
else:
print(lookfor, "does not appear in the text")
###Output
Obama does not appear in the text
###Markdown
Each argument in the above if statements is a boolean expression. Often you want to have alternatives: blocks of code that get evaluated in the event that the argument to an if statement is false. This is where **`elif`** (else if) and else come in. An **`elif`** is evaluated if all preceding if or elif arguments have evaluated to false. The else statement is the last resort, assigning the code that gets executed if no if or elif above it is true. These statements are optional, and can be added to an if statement in any order, with at most one code block being evaluated. An else will always have its code executed if nothing above it is true.
###Code
status = "Senior"
if status == "Freshman":
print("Hello newbie!")
print("How's college treating you?")
elif status == "Sophomore":
print("Welcome back!")
elif status == "Junior":
print("Almost there, almost there")
elif status == "Senior":
print("You can drink now! You will need it.")
elif status == "Senior":
print("The secret of life is 42. But you will never see this")
else:
print("Are you a graduate student? Or (gasp!) faculty?")
###Output
You can drink now! You will need it.
###Markdown
Exercise * You need to be 21 years old and above to drink alcohol. Write a conditional expression that checks the age, and prints out whether the person is allowed to drink alcohol.
###Code
age = 20
if age >= 21:
print("You are above 21, you can drink")
else:
print("You are too young. Wait for", 21 - age, "years")
age = 18
if age >= 21:
print("You are above 21, you can drink")
else:
print("You are too young. Wait for", 21 - age, "years")
###Output
You are too young. Wait for 3 years
###Markdown
* You need to be 16 years old and above to drive. If you are above 16, you also need to have a driving license. Write a conditional expression that checks the age and prints out: (a) whether the person is too young to drive, (b) whether the person satisfies the age criteria but needs a driving license, or (c) the person can drive.
###Code
age = 18
has_driving_license = False
if age < 16:
print("You are too young to drive")
else:
if has_driving_license:
print("You can drive")
else:
print("You are old enough to drive, but you need a driving license")
age = 15
has_driving_license = True
if age >= 16 and has_driving_license:
print("You can drive")
elif age >= 16 and not has_driving_license:
print("You are old enough to drive, but you need a driving license")
else:
print("You are too young to drive")
age = 18
has_driving_license = False
if age < 16:
print("You are too young to drive")
else:
if has_driving_license:
print("You can drive")
else:
print("You are old enough to drive, but you need a driving license")
age = 18
has_driving_license = False
if age >= 16 and has_driving_license:
print("You can drive")
elif age >= 16 and not has_driving_license:
print("You are old enough to drive, but you need a driving license")
else:
print("You are too young to drive")
###Output
You are old enough to drive, but you need a driving license
###Markdown
* You need to be above 18 and a US Citizen, to be able to vote. You also need to be registered. Write the conditional expression that checks for these conditions and prints out whether the person can vote. If the person cannot vote, the code should print out the reason (below 18, or not a US citizen, or not registered, or a combination thereof).
###Code
age = 15
us_citizen = False
registered = True
if age >= 18 and us_citizen and registered:
print("You can vote")
else:
print("You cannot vote")
# Now let's explain the reason
if age < 18:
print("You are below 18")
if not us_citizen:
print("You are not a US citizen")
if not registered:
print("You are not registered")
age = 15
us_citizen = False
registered = True
if age >= 18 and us_citizen and registered:
print("You can vote")
else:
print("You cannot vote")
if age < 18:
print("You are below 18")
if not us_citizen:
print("You are not a US citizen")
if not registered:
print("You are not registered")
###Output
You cannot vote
You are below 18
You are not a US citizen
###Markdown
* You qualify for US citizenship if either of the following holds: (a) your parents are US citizens and you are under 18, or (b) you were born in the US. Write the conditional expression that checks whether a person is eligible to become a US citizen. If the person is not eligible, the code should print out the reason.
###Code
age = 15
parents_citizens = False
born_in_us = False
if (age < 18 and parents_citizens) or born_in_us:
print("You can become a US citizen")
else: # none of the two conditions around the or were True
print("You cannot become a US citizen")
if not born_in_us:
print("You were not born in the US")
if not (age < 18 and parents_citizens):
print("You need to be below 18 and your parents need to be citizens")
age = 16
parents_citizens = True
born_in_us = False
if (age < 18 and parents_citizens) or born_in_us:
print("You can become a US citizen")
else: # none of the conditions were true
if not born_in_us:
print("You were not born in the US")
if not (age < 18 and parents_citizens):
print("You need to be below 18 and your parents need to be citizens")
###Output
You can become a US citizen
|
0.12/_downloads/plot_stats_spatio_temporal_cluster_sensors.ipynb | ###Markdown
.. _stats_cluster_sensors_2samp_spatial: Spatiotemporal permutation F-test on full sensor dataTests for differential evoked responses in at least one condition using a permutation clustering test. The FieldTrip neighbor templates will be used to determine the adjacency between sensors. This serves as a spatial prior to the clustering. Significant spatiotemporal clusters will then be visualized using custom matplotlib code.
###Code
# Authors: Denis Engemann <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
from mne.viz import plot_topomap
import mne
from mne.stats import spatio_temporal_cluster_test
from mne.datasets import sample
from mne.channels import read_ch_connectivity
print(__doc__)
###Output
_____no_output_____
###Markdown
Set parameters--------------
###Code
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id = {'Aud_L': 1, 'Aud_R': 2, 'Vis_L': 3, 'Vis_R': 4}
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 30)
events = mne.read_events(event_fname)
###Output
_____no_output_____
###Markdown
Read epochs for the channel of interest---------------------------------------
###Code
picks = mne.pick_types(raw.info, meg='mag', eog=True)
reject = dict(mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=None, reject=reject, preload=True)
epochs.drop_channels(['EOG 061'])
epochs.equalize_event_counts(event_id, copy=False)
condition_names = 'Aud_L', 'Aud_R', 'Vis_L', 'Vis_R'
X = [epochs[k].get_data() for k in condition_names] # as 3D matrix
X = [np.transpose(x, (0, 2, 1)) for x in X] # transpose for clustering
###Output
_____no_output_____
###Markdown
Load FieldTrip neighbor definition to setup sensor connectivity---------------------------------------------------------------
###Code
connectivity, ch_names = read_ch_connectivity('neuromag306mag')
print(type(connectivity)) # it's a sparse matrix!
plt.imshow(connectivity.toarray(), cmap='gray', origin='lower',
interpolation='nearest')
plt.xlabel('{} Magnetometers'.format(len(ch_names)))
plt.ylabel('{} Magnetometers'.format(len(ch_names)))
plt.title('Between-sensor adjacency')
###Output
_____no_output_____
###Markdown
Compute permutation statistic-----------------------------How does it work? We use clustering to `bind` together features which are similar. Our features are the magnetic fields measured over our sensor array at different times. This reduces the multiple comparison problem. To compute the actual test-statistic, we first sum all F-values in all clusters. We end up with one statistic for each cluster. Then we generate a distribution from the data by shuffling our conditions between our samples and recomputing our clusters and the test statistics. We test for the significance of a given cluster by computing the probability of observing a cluster of that size. For more background read: Maris/Oostenveld (2007), "Nonparametric statistical testing of EEG- and MEG-data", Journal of Neuroscience Methods, Vol. 164, No. 1, pp. 177-190. doi:10.1016/j.jneumeth.2007.03.024
###Code
# set cluster threshold
threshold = 50.0 # very high, but the test is quite sensitive on this data
# set family-wise p-value
p_accept = 0.001
cluster_stats = spatio_temporal_cluster_test(X, n_permutations=1000,
threshold=threshold, tail=1,
n_jobs=1,
connectivity=connectivity)
T_obs, clusters, p_values, _ = cluster_stats
good_cluster_inds = np.where(p_values < p_accept)[0]
###Output
_____no_output_____
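###Markdown
For intuition, here is a minimal NumPy/SciPy sketch of the label-permutation idea used above, stripped down to a single feature and with no clustering or spatial adjacency, so it is *not* what `spatio_temporal_cluster_test` computes internally -- just the core resampling logic. The two "conditions" below are synthetic and exist only for illustration.
###Code
import numpy as np
from scipy import stats

rng = np.random.RandomState(42)
# two hypothetical conditions, 30 "epochs" each, one feature
cond_a = rng.randn(30) + 0.8
cond_b = rng.randn(30)
f_obs, _ = stats.f_oneway(cond_a, cond_b)  # observed test statistic

data = np.concatenate([cond_a, cond_b])
labels = np.array([0] * 30 + [1] * 30)
null_f = np.empty(1000)
for k in range(1000):
    perm = rng.permutation(labels)  # shuffle the condition labels
    null_f[k] = stats.f_oneway(data[perm == 0], data[perm == 1])[0]
# probability of seeing a statistic at least as large under the shuffled null
p_perm = (np.sum(null_f >= f_obs) + 1.0) / (1000 + 1.0)
print('observed F = %.2f, permutation p = %.4f' % (f_obs, p_perm))
###Output
_____no_output_____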
###Markdown
Note. The same functions work with source estimate. The only differences are the origin of the data, the size, and the connectivity definition. It can be used for single trials or for groups of subjects.Visualize clusters------------------
###Code
# configure variables for visualization
times = epochs.times * 1e3
colors = 'r', 'r', 'steelblue', 'steelblue'
linestyles = '-', '--', '-', '--'
# grand average as numpy arrray
grand_ave = np.array(X).mean(axis=1)
# get sensor positions via layout
pos = mne.find_layout(epochs.info).pos
# loop over significant clusters
for i_clu, clu_idx in enumerate(good_cluster_inds):
# unpack cluster information, get unique indices
time_inds, space_inds = np.squeeze(clusters[clu_idx])
ch_inds = np.unique(space_inds)
time_inds = np.unique(time_inds)
# get topography for F stat
f_map = T_obs[time_inds, ...].mean(axis=0)
# get signals at significant sensors
signals = grand_ave[..., ch_inds].mean(axis=-1)
sig_times = times[time_inds]
# create spatial mask
mask = np.zeros((f_map.shape[0], 1), dtype=bool)
mask[ch_inds, :] = True
# initialize figure
fig, ax_topo = plt.subplots(1, 1, figsize=(10, 3))
title = 'Cluster #{0}'.format(i_clu + 1)
fig.suptitle(title, fontsize=14)
# plot average test statistic and mark significant sensors
image, _ = plot_topomap(f_map, pos, mask=mask, axes=ax_topo,
cmap='Reds', vmin=np.min, vmax=np.max)
# advanced matplotlib for showing image with figure and colorbar
# in one plot
divider = make_axes_locatable(ax_topo)
# add axes for colorbar
ax_colorbar = divider.append_axes('right', size='5%', pad=0.05)
plt.colorbar(image, cax=ax_colorbar)
ax_topo.set_xlabel('Averaged F-map ({:0.1f} - {:0.1f} ms)'.format(
*sig_times[[0, -1]]
))
# add new axis for time courses and plot time courses
ax_signals = divider.append_axes('right', size='300%', pad=1.2)
for signal, name, col, ls in zip(signals, condition_names, colors,
linestyles):
ax_signals.plot(times, signal, color=col, linestyle=ls, label=name)
# add information
ax_signals.axvline(0, color='k', linestyle=':', label='stimulus onset')
ax_signals.set_xlim([times[0], times[-1]])
ax_signals.set_xlabel('time [ms]')
ax_signals.set_ylabel('evoked magnetic fields [fT]')
# plot significant time range
ymin, ymax = ax_signals.get_ylim()
ax_signals.fill_betweenx((ymin, ymax), sig_times[0], sig_times[-1],
color='orange', alpha=0.3)
ax_signals.legend(loc='lower right')
ax_signals.set_ylim(ymin, ymax)
# clean up viz
mne.viz.tight_layout(fig=fig)
fig.subplots_adjust(bottom=.05)
plt.show()
###Output
_____no_output_____ |
01_OOI_Profiles.ipynb | ###Markdown
**OHW20 project: OOI profile sections** Visualizing an invisible boundary: locating the shelfbreak front in the northern Mid-Atlantic BightContents[Project description](Project-description)[Notebook setup](Notebook-setup)[Load data](Load-Data)[Time series scatter plots](Plot-scatter-time-series)[Select and plot same day for each profiler](Select-the-same-day-for-each-profiler-and-plot)[Extract downcast](Extract-downcast)[Extract down/upcast or both](Extract-down/upcast-or-both)[Vertical discretization of individual profiles](Below-functions-perform-vertical-discretization-of-individual-profiles)[Split individual profiles](Indices-to-split-individual-profiles)[2D arrays](Sorting-profs-into-2D-arrays-with-equal-depth-range) Project description The U.S. Ocean Observatories Initiative (OOI) provides data from moorings deployed in the Pioneer Array on the edge of the Northeast U.S. Shelf (NES) in the northern Mid-Atlantic Bight. Profiler moorings support wire-following profiling packages with a multidisciplinary sensor suite including temperature, conductivity (salinity), pressure (depth) and more. Profilers continuously sample these parameters over a specified depth interval (20 meters below sea surface to 20 meters above the bottom). Although it may be straightforward to acquire and plot data from a single profile, or a single profiler over time, it is much more challenging to be able to visualize and analyze data from multiple profiler moorings. The goal of this project will be to develop flexible, scalable tools to assemble, plot, and analyze data from multiple moorings over time. We are targeting a specific use case: locating the shelfbreak front and illustrating the dynamic movement of this invisible boundary. We would like to develop a flexible, scalable workflow implemented in a Jupyter Notebook to visualize and analyze CTD data (in particular, salinity and depth) from multiple profiler moorings. This use case will serve ocean scientists and students including those involved with NES-LTER. For more information on the Pioneer Array please see (https://oceanobservatories.org/array/coastal-pioneer-array/) Notebook setup
###Code
# Note these libraries are used by Sage's notebook Profile_Examples_for_WHOI.ipynb
import requests
import os
import re
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import cmocean as cmo
from scipy import interpolate
# libraries imported by Filipe in gist.ipynb
import ctd
import gsw
from ctd.read import _basename
# Make the Plots pretty
import seaborn as sns
sns.set()
# Supress open_mfdataset warnings
import warnings
warnings.filterwarnings('ignore')
plt.rcParams.update({'font.size': 14})
###Output
_____no_output_____
###Markdown
[Back to top](OHW20-project:-OOI-profile-sections) Load DataThe [OOI Data Portal](https://ooinet.oceanobservatories.org/) can be used to access data from any OOI instrument. The data typically come in NetCDF format, a large array-like data file with accompanying metadata. To create a custom data catalog, you need to sign up for a free account. Data for this project has already been requested by Stace through the portal and is available on the Wiki page of the project repository.October 2019 recovered- Inshore water depth of 92 m- Central inshore water depth of 126 m- Central offshore water depth of 146 m- Offshore water depth of 451 m- Distance between inshore and central inshore 15.32 km, between central inshore and central offshore 14.47 km, between central offshore and offshore 17.91 km [(link to cruise report)](https://alfresco.oceanobservatories.org/alfresco/d/d/workspace/SpacesStore/cf3b4ad7-6df6-4c77-8f2b-8c6de78db447/3204-01304_Cruise_Report_Coastal_Pioneer_13_2020-02-14_Ver_1-00.pdf)
###Code
# Provide URL to load a single file that has already been downloaded to OOI's OPENDAP server
# remember to use #fillmismatch
# Create directory that includes all urls
data_url = {}
data_url['inshore'] = 'https://opendap.oceanobservatories.org/thredds/dodsC/ooi/[email protected]/20200806T132326640Z-CP03ISPM-WFP01-03-CTDPFK000-recovered_wfp-ctdpf_ckl_wfp_instrument_recovered/deployment0003_CP03ISPM-WFP01-03-CTDPFK000-recovered_wfp-ctdpf_ckl_wfp_instrument_recovered_20191006T150003-20191031T212239.977728.nc#fillmismatch'
data_url['central_inshore'] = 'https://opendap.oceanobservatories.org/thredds/dodsC/ooi/[email protected]/20200806T132900316Z-CP02PMCI-WFP01-03-CTDPFK000-recovered_wfp-ctdpf_ckl_wfp_instrument_recovered/deployment0013_CP02PMCI-WFP01-03-CTDPFK000-recovered_wfp-ctdpf_ckl_wfp_instrument_recovered_20191007T210003-20191031T212442.986087.nc#fillmismatch'
data_url['central_offshore'] = 'https://opendap.oceanobservatories.org/thredds/dodsC/ooi/[email protected]/20200806T133142674Z-CP02PMCO-WFP01-03-CTDPFK000-recovered_wfp-ctdpf_ckl_wfp_instrument_recovered/deployment0013_CP02PMCO-WFP01-03-CTDPFK000-recovered_wfp-ctdpf_ckl_wfp_instrument_recovered_20191008T140003-20191031T212529.983845.nc#fillmismatch'
data_url['offshore'] = 'https://opendap.oceanobservatories.org/thredds/dodsC/ooi/[email protected]/20200806T133343088Z-CP04OSPM-WFP01-03-CTDPFK000-recovered_wfp-ctdpf_ckl_wfp_instrument_recovered/deployment0012_CP04OSPM-WFP01-03-CTDPFK000-recovered_wfp-ctdpf_ckl_wfp_instrument_recovered_20191013T160003-20191031T211622.990750.nc#fillmismatch'
# Load the data file using xarray
def load2xarray(location):
"""
Load data at given location and reduce to variables of interest.
"""
ds = xr.open_dataset(data_url[location])
ds = ds.swap_dims({'obs': 'time'}) #Swap dimensions
print('Dataset '+ location +' has %d points' % ds.time.size)
ds = ds[['ctdpf_ckl_seawater_pressure','ctdpf_ckl_seawater_temperature','practical_salinity']]
return ds
ds={}
for loc in list(data_url.keys()):
ds[loc] = load2xarray(loc)
###Output
Dataset inshore has 158237 points
Dataset central_inshore has 210513 points
Dataset central_offshore has 236989 points
Dataset offshore has 199587 points
###Markdown
[Back to top](OHW20-project:-OOI-profile-sections) Plot scatter time series
###Code
#####################################
# plotting function
def scatter_timeseries(ds,location=None):
fig,ax = plt.subplots(figsize=(10,6),nrows=2,sharex=True,constrained_layout=False)
cc = ax[0].scatter(ds.time,ds.ctdpf_ckl_seawater_pressure,s=1,
c=ds.ctdpf_ckl_seawater_temperature,
cmap = plt.get_cmap('RdYlBu_r',30),vmin=14,vmax=23)
plt.colorbar(cc,ax=ax[0],label='temperature [\N{DEGREE SIGN}C]')
# plt.xticks(rotation=30)
ax[0].set_xlim(ds.time[0],ds.time[-1]) # Set the time limits to match the dataset
cc = ax[1].scatter(ds.time,ds.ctdpf_ckl_seawater_pressure,s=1,
c=ds.practical_salinity,
cmap = plt.get_cmap('cmo.haline',30),vmin=34,vmax=36.3)
plt.colorbar(cc,ax=ax[1],label='practical salinity')
# plt.xticks(rotation=30)
for axh in ax.flat: axh.set_ylabel('pressure [dbar]'); axh.invert_yaxis();
if location: ax[0].set_title(location,fontweight='bold')
fig.autofmt_xdate()
return fig,ax
#######################################
# plot scatter timeseries for all locations
for loc in list(data_url.keys()):
scatter_timeseries(ds[loc],loc)
###Output
C:\Users\luizc\Miniconda3\lib\site-packages\pandas\plotting\_matplotlib\converter.py:103: FutureWarning: Using an implicitly registered datetime converter for a matplotlib plotting method. The converter was registered by pandas on import. Future versions of pandas will require you to explicitly register matplotlib converters.
To register the converters:
>>> from pandas.plotting import register_matplotlib_converters
>>> register_matplotlib_converters()
warnings.warn(msg, FutureWarning)
###Markdown
[Back to top](OHW20-project:-OOI-profile-sections) Select the same day for each profiler and plot This allows us to compare the timing of the profilers. I would prefer to do this as a function, rather than copy & paste, similar to the cell above; a reusable helper is sketched after the next cell.
###Code
# select the same day from each location
D = "2019-10-15"
dsD={}
for loc in list(data_url.keys()):
dsD[loc] = ds[loc].sel(time=D)
# plot scatter timeseries for all locations - one day
for loc in list(data_url.keys()):
scatter_timeseries(dsD[loc],loc)
###Output
_____no_output_____
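###Markdown
A sketch of the reusable helper suggested above. It simply wraps the two steps from the previous cells; `ds` and `scatter_timeseries` are the objects already defined in this notebook, and the function name is only a suggestion.
###Code
def scatter_timeseries_day(ds_dict, day):
    """Select a single day from every profiler dataset and plot it."""
    for loc, ds_loc in ds_dict.items():
        scatter_timeseries(ds_loc.sel(time=day), loc)

scatter_timeseries_day(ds, "2019-10-15")
###Output
_____no_output_____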
###Markdown
[Back to top](OHW20-project:-OOI-profile-sections) Extract downcastIn order to be able to plot a section we need individual profiles at each location at a given time. We cannot just resample by time because it is profiling data, which is not binned in vertical. A few steps I would take next: - extract the downcast only- assign one time to each profile **Trying to split casts based on changes in pressure**
###Code
# plot first few profiles from initial dataset
dummy = ds['inshore']
dummy.ctdpf_ckl_seawater_pressure[0:2000].plot(marker='*',linestyle='')
###Output
_____no_output_____
###Markdown
We can see that there is always a downcast, followed by a time gap, then an upcast, and then the next downcast.
###Code
# plot first cast to check
fig,ax = plt.subplots(ncols=2,figsize=(10,4))
dummy.ctdpf_ckl_seawater_pressure[500:1200].plot(marker='*',linestyle='',ax=ax[0])
# plot difference in pressure
dummy.ctdpf_ckl_seawater_pressure[500:1200].diff(dim='time').plot(marker='*',linestyle='',ax=ax[1])
###Output
_____no_output_____
###Markdown
Based on these plots I can apply a threshold of 0.1 to diff(pressure). I tried 0.2, but then we lose too much data.
###Code
# select only data where pressure is increasing
dummy_down = dummy.where(dummy.ctdpf_ckl_seawater_pressure.diff(dim='time')<0.1)
# plot to check if it worked
dummy_down.ctdpf_ckl_seawater_pressure[0:1000].plot(marker='*',linestyle='')
###Output
_____no_output_____
###Markdown
Seems to work sort of ok?
###Code
# plt scatter of old vs. new
fig,ax = plt.subplots(ncols=2,figsize=(15,4),sharey=True,constrained_layout=True)
ax[0].scatter(dummy.time,dummy.ctdpf_ckl_seawater_pressure,s=1,
c=dummy.ctdpf_ckl_seawater_temperature,
cmap = plt.get_cmap('RdYlBu_r',30),vmin=14,vmax=23)
ax[0].set_title('all data')
# ax[0].invert_yaxis()
ax[1].scatter(dummy_down.time,dummy_down.ctdpf_ckl_seawater_pressure,s=1,
c=dummy_down.ctdpf_ckl_seawater_temperature,
cmap = plt.get_cmap('RdYlBu_r',30),vmin=14,vmax=23)
ax[1].set_title('down cast only')
ax[1].invert_yaxis()
###Output
_____no_output_____
###Markdown
[Back to top](OHW20-project:-OOI-profile-sections) Extract down/upcast or both
###Code
def get_cast(ctd:xr, cast:str = 'down'):
"""
Extract downcast, upcast or both and assign a specific profile attribute based on the cast
"""
if cast == 'up':
# select only data where pressure is decreasing
down = ctd.where((np.diff(ctd.ctdpf_ckl_seawater_pressure) < 0.1) &
(np.fabs(ctd.ctdpf_ckl_seawater_pressure.diff(dim = 'time')) > .1)).dropna(dim = 'time')
# out = down.assign(ctdpf_ckl_cast=xr.ones_like(down['ctdpf_ckl_seawater_pressure']) * 1)
out = down.assign(ctdpf_ckl_cast='upcast')
return out
if cast == 'down':
# select only data where pressure is increasing
down = ctd.where(ctd.ctdpf_ckl_seawater_pressure.diff(dim = 'time') > 0.1).dropna(dim = 'time')
# out = down.assign(ctdpf_ckl_cast=xr.ones_like(down['ctdpf_ckl_seawater_pressure']) * 2)
out = down.assign(ctdpf_ckl_cast='downcast')
return out
if cast == 'full':
down = ctd.where(((np.diff(ctd.ctdpf_ckl_seawater_pressure) < 0.1) &
(np.fabs(ctd.ctdpf_ckl_seawater_pressure.diff(dim = 'time')) > .1)) |
(ctd.ctdpf_ckl_seawater_pressure.diff(dim = 'time') > 0.1)).dropna(dim = 'time')
idx = np.where(np.diff(xr.concat([down.ctdpf_ckl_seawater_pressure[0],
down.ctdpf_ckl_seawater_pressure], dim='time')) > 0.1, 'downcast', 'upcast')
out = down.assign(ctdpf_ckl_cast=xr.DataArray(idx, dims=["time"]))
return out
if cast not in ('up', 'down', 'full'):
raise NameError(
f'Expected cast name to be `up`, `down`, or `full`, instead got {cast}'
)
###Output
_____no_output_____
###Markdown
Function to plot the timeseries
###Code
def plot_cast(sds:xr, label:str, ax, c=None, cmap=None) -> None:
if 'temp' in label:
c = sds.ctdpf_ckl_seawater_temperature
cmap = plt.get_cmap('cmo.thermal',30)
if 'sal' in label:
c = sds.practical_salinity
cmap = plt.get_cmap('cmo.haline',30)
vmin, vmax = c.min(), c.max()
s = ax.scatter(sds.time,
sds.ctdpf_ckl_seawater_pressure,
s=1, c=c, cmap=cmap, vmin=vmin, vmax=vmax)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="2%", pad=0.05)
plt.colorbar(s, cax=cax, label=label)
for tlab in ax.get_xticklabels():
tlab.set_rotation(40)
tlab.set_horizontalalignment('right')
###Output
_____no_output_____
###Markdown
Example with downcast
###Code
%time
downcast={}
cast = 'down'
for loc in list(data_url.keys())[:1]:
downcast[loc] = get_cast(ds[loc], cast=cast)
fig, ax = plt.subplots(ncols=2,figsize=(15,4), sharey=True, constrained_layout=True)
plot_cast(sds=downcast[loc],
label='temperature [\N{DEGREE SIGN}C]', ax=ax[0])
plot_cast(sds=downcast[loc],
label='practical salinity', ax=ax[1])
ax[1].invert_yaxis()
fig.suptitle(f"{loc} [{cast}cast only]", fontweight='bold')
fig.autofmt_xdate()
plt.subplots_adjust(hspace=0.5)
###Output
Wall time: 0 ns
###Markdown
Example with upcast
###Code
%time
upcast={}
cast = 'up'
for loc in list(data_url.keys())[:1]:
upcast[loc] = get_cast(ds[loc], cast=cast)
# plt scatter of old vs. new
fig, ax = plt.subplots(ncols=2,figsize=(15,4), sharey=True, constrained_layout=True)
plot_cast(sds=upcast[loc],
label='temperature [\N{DEGREE SIGN}C]', ax=ax[0])
plot_cast(sds=upcast[loc],
label='practical salinity', ax=ax[1])
ax[1].invert_yaxis()
fig.suptitle(f"{loc} [{cast}cast only]", fontweight='bold')
fig.autofmt_xdate()
plt.subplots_adjust(hspace=0.5)
###Output
_____no_output_____
###Markdown
**Next step: Assign only one time for each profile?**I think we can assign only one time for each profile or convert the time series into 2D profile arrays. From that, we can work with individual profiles quite easily, including vertical interpolation/binning; a minimal sketch follows the next cell. Example with fullcast with intermittent values removed
###Code
fullcast={}
cast = 'full'
for loc in list(data_url.keys())[:1]:
fullcast[loc] = get_cast(ds[loc], cast=cast)
fig, ax = plt.subplots(ncols=2,figsize=(15,4), sharey=True, constrained_layout=True)
# display one of the casts just for comparison with previous
sds = fullcast[loc].where(fullcast[loc].ctdpf_ckl_cast == 'upcast')
plot_cast(sds=sds,
label='temperature [\N{DEGREE SIGN}C]', ax=ax[0])
plot_cast(sds=sds,
label='practical salinity', ax=ax[1])
ax[1].invert_yaxis()
fig.suptitle(f"{loc} [{cast}cast]", fontweight='bold')
fig.autofmt_xdate()
plt.subplots_adjust(hspace=0.5)
###Output
_____no_output_____
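###Markdown
Returning to the note above about assigning a single time to each profile, here is one hedged approach (a sketch only, not the final workflow): number the profiles by counting pressure resets, then use the mean timestamp of each group as the representative time. The `downcast` dictionary is the one built a few cells above.
###Code
pres = downcast['inshore'].ctdpf_ckl_seawater_pressure.values
times_ns = downcast['inshore'].time.values.astype('int64')  # nanoseconds since epoch
# a new profile starts wherever the pressure jumps back down
profile_id = np.concatenate([[0], np.cumsum(np.diff(pres) < 0)])
mean_ns = np.array([times_ns[profile_id == k].mean() for k in np.unique(profile_id)])
profile_times = mean_ns.astype('int64').astype('datetime64[ns]')
print(profile_times.size, profile_times[:3])
###Output
_____no_output_____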
###Markdown
[Back to top](OHW20-project:-OOI-profile-sections) The functions below perform vertical discretization of individual profiles. Optionally, smooth the vertical profile
###Code
def profile_filt(data: dict, key: str, window_length: int, polyorder: int, profile_disp: bool) -> dict:
"""
Profile smoothing using `savgol_filter`. In general `savgol_filter` produces good results compared to other
methods I have tried. For more, please check https://docs.scipy.org/doc/scipy/reference/signal.html
"""
from scipy.signal import savgol_filter
out = data
out[key] = savgol_filter(data[key], window_length=window_length, polyorder=polyorder)
if profile_disp is True:
fig, ax = plt.subplots()
ax.plot(out[key], data['pres'], '-r', label='savgol_filter')
ax.plot(data[key], data['pres'], '-k', label='original')
ax.invert_yaxis()
plt.show()
return out
def profile_interp(pres: np.array, y: np.array, key: str, start: float = 20., end: float = 100.,
step: float = 1., method: str = 'binning', filt_profile: bool = False,
window_length: int = 5, polyorder: int = 1, profile_disp: bool = False) -> dict:
"""
Interpolate CTD profiles into a constant sampling rate.
Optionally, smooth the profile. Often needed in the case of fluorescence profiles
:param: pres - pressure or any other x-like var
:param: y - temperature, salinity, etc.
    :param: start - start position of vertical discretization (pressure). default 20
:param: end - end position of the vertical discretization (pressure). default 100
:param: step - discretization step. default 1
:param: method - discretization method (binning or interpolation). default binning
:param: filt_profile - whether to filter the profile or not (True-filter). default False
:param: window_length - if filt_profile is set to True: the length of the filter window
(i.e., the number of coefficients). default 5
:param: polyorder - order of the polynomial used to fit the samples. default 1
:param: profile_disp - if filt_profile is set to True: displayed the original versus filtered profile
"""
znew = np.arange(start, end + step, step)
if window_length % 2 == 0:
window_length -= 1
sz = pres.size
if sz % 2 == 0:
sz -= 1
# window size == 5 or else odd
window_length = min(window_length, sz)
polyorder = min(polyorder, window_length)
if 'bin' in method:
interp_prof = []
append = interp_prof.append
# There is a 'groupby' command from xarray which is handy.
# But due to time constraint I went the traditional way.
for i, z in enumerate(znew[:-1]):
upper = z + step / 2
lower = z - step / 2
if i == 0:
lower = max(0, z - step / 2)
idx = np.where((pres > lower) & (pres <= upper))[0]
if idx.size == 0:
append(np.nan)
continue
if y[idx].mean().values > 100:
print(y[idx])
append(y[idx].mean().values)
out = {'pres': znew[:-1], key: np.array(interp_prof)}
if filt_profile is True:
return profile_filt(data=out, key=key,
window_length=window_length,
polyorder=polyorder,
profile_disp=profile_disp)
return out
if 'interp' in method:
# temperature, salinity, etc
f = interpolate.interp1d(pres, y, fill_value=(np.nan, np.nan))
out = {'pres': znew, key: f(znew)}
if filt_profile is True:
return profile_filt(data=out, key=key,
window_length=window_length,
polyorder=polyorder,
profile_disp=profile_disp)
return out
###Output
_____no_output_____
###Markdown
[Back to top](OHW20-project:-OOI-profile-sections) Indices to split individual profiles
###Code
def split_profiles(pres: np.array) -> tuple:
pos = np.where(np.diff(pres) < 0)[0]
start_points = np.hstack((0, pos + 1))
end_points = np.hstack((pos, pres.size))
return start_points, end_points
start_idx, end_idx = split_profiles(pres=downcast['inshore'].ctdpf_ckl_seawater_pressure)
start_idx, end_idx, start_idx.size, end_idx.size
###Output
_____no_output_____
###Markdown
Vis Split Profile
###Code
fig, ax = plt.subplots(1,2, figsize=(14, 12), sharey='all')
z = downcast['inshore'].ctdpf_ckl_seawater_pressure[start_idx[0]:end_idx[0]]
t = downcast['inshore'].ctdpf_ckl_seawater_temperature[start_idx[0]:end_idx[0]]
s = downcast['inshore'].practical_salinity[start_idx[0]:end_idx[0]]
ax[0].plot(t, -z, label='original')
ax[1].plot(s, -z)
# Interp/bin profile
keys = 'temp', 'sal'
for i, y in enumerate((t, s)):
# discretization by grouping
out = profile_interp(pres=z, y=y, key=keys[i], start=np.floor(z.min()),
end=np.ceil(z.max()), step=1, method='binning')
ax[i].plot(out[keys[i]], -out['pres'], '-k', label='binned')
# discretization by interpolation
out = profile_interp(pres=z, y=y, key=keys[i], start=np.ceil(z.min()),
end=np.floor(z.max()), step=1, method='interpolate')
    ax[i].plot(out[keys[i]], -out['pres'], ':r', label='interpolated')
# discretization and smoothing
out = profile_interp(pres=z, y=y, key=keys[i], start=np.floor(z.min()),
end=np.ceil(z.max()), step=1, method='binning', filt_profile=True)
ax[i].plot(out[keys[i]], -out['pres'], '--g', label='smoothed')
ax[0].legend()
###Output
_____no_output_____
###Markdown
[Back to top](OHW20-project:-OOI-profile-sections) Sorting profs into 2D arrays with equal depth range
###Code
sal = []
temp = []
depth = []
dates = []
append_s = sal.append
append_t = temp.append
append_d = dates.append
for i, (sidx, eidx) in enumerate(zip(start_idx, end_idx)):
s = downcast['inshore'].practical_salinity[sidx:eidx+1]
t = downcast['inshore'].ctdpf_ckl_seawater_temperature[sidx:eidx+1]
z = downcast['inshore'].ctdpf_ckl_seawater_pressure[sidx:eidx+1]
p = (100 * i + 1) / start_idx.size
if z.size < 10:
if p % 5 < 0.1:
print(f'Start: {sidx:>6} | End: {eidx:>6} | ArrayLen: {t.size:>4} | Skip | {p:.2f}%')
continue
if p % 5 < 0.1:
print(f'Start: {sidx:>6} | End: {eidx:>6} | ArrayLen: {t.size:>4} | {p:.2f}%')
append_d(downcast['inshore'].coords['time'][sidx:eidx+1].astype('float').values.mean())
# Interp/bin profile
# discretization by grouping
out_s = profile_interp(pres=z, y=s, key='sal')
append_s(out_s['sal'])
if sidx == 0:
depth = out_s['pres']
out_t = profile_interp(pres=z, y=t, key='temp')
append_t(out_t['temp'])
sal = np.array(sal).T
temp = np.array(temp).T
dates = np.repeat(np.array(dates).reshape(1, -1), sal.shape[0], axis=0)
depth = np.repeat(np.array(depth).reshape(-1, 1), sal.shape[1], axis=1)
print(sal, sal.shape, temp.shape, dates.shape, depth.shape)
fig, ax = plt.subplots(2,1, figsize=(12, 7), sharex='all')
ax[0].pcolormesh(dates, depth, np.ma.masked_where(np.isnan(sal), sal),
cmap = plt.get_cmap('cmo.haline',30))
ax[0].set_ylim(depth.min(), depth.max())
ax[0].invert_yaxis()
ax[1].pcolormesh(dates, depth, np.ma.masked_where(np.isnan(temp), temp),
cmap=plt.get_cmap('cmo.thermal',30))
ax[1].set_ylim(depth.min(), depth.max())
ax[1].invert_yaxis()
###Output
_____no_output_____
###Markdown
**OHW20 project: OOI profile sections** Visualizing an invisible boundary: locating the shelfbreak front in the northern Mid-Atlantic BightContents[Project description](Project-description)[Notebook setup](Notebook-setup)[Load data](Load-Data)[Time series scatter plots](Plot-scatter-time-series)[Extract downcast](Extract-downcast)[Extract down/upcast or both](Extract-down/upcast-or-both)[Vertical discretization of individual profiles](Below-functions-perform-vertical-discretization-of-individual-profiles)[Indices to split individual profiles](Indices-to-split-individual-profiles) Project description The U.S. Ocean Observatories Initiative (OOI) provides data from moorings deployed in the Pioneer Array on the edge of the Northeast U.S. Shelf (NES) in the northern Mid-Atlantic Bight. Profiler moorings support wire-following profiling packages with a multidisciplinary sensor suite including temperature, conductivity (salinity), pressure (depth) and more. Profilers continuously sample these parameters over a specified depth interval (20 meters below sea surface to 20 meters above the bottom). Although it may be straightforward to acquire and plot data from a single profile, or a single profiler over time, it is much more challenging to be able to visualize and analyze data from multiple profiler moorings. The goal of this project will be to develop flexible, scalable tools to assemble, plot, and analyze data from multiple moorings over time. We are targeting a specific use case: locating the shelfbreak front and illustrating the dynamic movement of this invisible boundary. We would like to develop a flexible, scalable workflow implemented in a Jupyter Notebook to visualize and analyze CTD data (in particular, salinity and depth) from multiple profiler moorings. This use case will serve ocean scientists and students including those involved with NES-LTER. For more information on the Pioneer Array please see (https://oceanobservatories.org/array/coastal-pioneer-array/) Notebook setup
###Code
# Note these libraries are used by Sage's notebook Profile_Examples_for_WHOI.ipynb
import requests
import os
import re
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import cmocean as cmo
# libraries imported by Filipe in gist.ipynb
import ctd
import gsw
from ctd.read import _basename
# Make the Plots pretty
import seaborn as sns
sns.set()
# Supress open_mfdataset warnings
import warnings
warnings.filterwarnings('ignore')
plt.rcParams.update({'font.size': 14})
###Output
_____no_output_____
###Markdown
[Back to top](OHW20-project:-OOI-profile-sections) Load DataThe [OOI Data Portal](https://ooinet.oceanobservatories.org/) can be used to access data from any OOI instrument. The data typically come in NetCDF format. To create a custom data catalog, you need to sign up for a free account. Data for this project has already been requested by Stace through the portal and is available on the Wiki page of the project repository.October 2019 recovered- Inshore water depth of 92 m- Central inshore water depth of 126 m- Central offshore water depth of 146 m- Offshore water depth of 451 m- Distance between inshore and central inshore 15.32 km, between central inshore and central offshore 14.47 km, between central offshore and offshore 17.91 km (link to cruise report)
###Code
# Provide URL to load a single file that has already been downloaded to OOI's OPENDAP server
# remember to use #fillmismatch
# Create directory that includes all urls
data_url = {}
data_url['inshore'] = 'https://opendap.oceanobservatories.org/thredds/dodsC/ooi/[email protected]/20200806T132326640Z-CP03ISPM-WFP01-03-CTDPFK000-recovered_wfp-ctdpf_ckl_wfp_instrument_recovered/deployment0003_CP03ISPM-WFP01-03-CTDPFK000-recovered_wfp-ctdpf_ckl_wfp_instrument_recovered_20191006T150003-20191031T212239.977728.nc#fillmismatch'
data_url['central_inshore'] = 'https://opendap.oceanobservatories.org/thredds/dodsC/ooi/[email protected]/20200806T132900316Z-CP02PMCI-WFP01-03-CTDPFK000-recovered_wfp-ctdpf_ckl_wfp_instrument_recovered/deployment0013_CP02PMCI-WFP01-03-CTDPFK000-recovered_wfp-ctdpf_ckl_wfp_instrument_recovered_20191007T210003-20191031T212442.986087.nc#fillmismatch'
data_url['central_offshore'] = 'https://opendap.oceanobservatories.org/thredds/dodsC/ooi/[email protected]/20200806T133142674Z-CP02PMCO-WFP01-03-CTDPFK000-recovered_wfp-ctdpf_ckl_wfp_instrument_recovered/deployment0013_CP02PMCO-WFP01-03-CTDPFK000-recovered_wfp-ctdpf_ckl_wfp_instrument_recovered_20191008T140003-20191031T212529.983845.nc#fillmismatch'
data_url['offshore'] = 'https://opendap.oceanobservatories.org/thredds/dodsC/ooi/[email protected]/20200806T133343088Z-CP04OSPM-WFP01-03-CTDPFK000-recovered_wfp-ctdpf_ckl_wfp_instrument_recovered/deployment0012_CP04OSPM-WFP01-03-CTDPFK000-recovered_wfp-ctdpf_ckl_wfp_instrument_recovered_20191013T160003-20191031T211622.990750.nc#fillmismatch'
# Load the data file using xarray
def load2xarray(location):
"""
Load data at given location and reduce to variables of interest.
"""
ds = xr.open_dataset(data_url[location])
ds = ds.swap_dims({'obs': 'time'}) #Swap dimensions
print('Dataset '+ location +' has %d points' % ds.time.size)
ds = ds[['ctdpf_ckl_seawater_pressure','ctdpf_ckl_seawater_temperature','practical_salinity']]
return ds
ds={}
for loc in list(data_url.keys()):
ds[loc] = load2xarray(loc)
###Output
Dataset inshore has 158237 points
Dataset central_inshore has 210513 points
Dataset central_offshore has 236989 points
Dataset offshore has 199587 points
###Markdown
[Back to top](OHW20-project:-OOI-profile-sections) Plot scatter time series
###Code
#####################################
# plotting function
def scatter_timeseries(ds,location=None):
fig,ax = plt.subplots(figsize=(10,6),nrows=2,sharex=True,constrained_layout=False)
cc = ax[0].scatter(ds.time,ds.ctdpf_ckl_seawater_pressure,s=1,
c=ds.ctdpf_ckl_seawater_temperature,
cmap = plt.get_cmap('RdYlBu_r',30),vmin=14,vmax=23)
plt.colorbar(cc,ax=ax[0],label='temperature [\N{DEGREE SIGN}C]')
# plt.xticks(rotation=30)
ax[0].set_xlim(ds.time[0],ds.time[-1]) # Set the time limits to match the dataset
cc = ax[1].scatter(ds.time,ds.ctdpf_ckl_seawater_pressure,s=1,
c=ds.practical_salinity,
cmap = plt.get_cmap('cmo.haline',30),vmin=34,vmax=36.3)
plt.colorbar(cc,ax=ax[1],label='practical salinity')
# plt.xticks(rotation=30)
for axh in ax.flat: axh.set_ylabel('pressure [dbar]'); axh.invert_yaxis();
if location: ax[0].set_title(location,fontweight='bold')
fig.autofmt_xdate()
return fig,ax
#######################################
# plot scatter timeseries for all locations
for loc in list(data_url.keys()):
scatter_timeseries(ds[loc],loc)
###Output
_____no_output_____
###Markdown
Select the same day for each profiler and plot This allows us to compare the timing of the profilers. I would prefer to do this as a function, rather than copy & paste, similar to the cell above
###Code
# select the same day from each location
D = "2019-10-15"
dsD={}
for loc in list(data_url.keys()):
dsD[loc] = ds[loc].sel(time=D)
# plot scatter timeseries for all locations - one day
for loc in list(data_url.keys()):
scatter_timeseries(dsD[loc],loc)
###Output
_____no_output_____
###Markdown
[Back to top](OHW20-project:-OOI-profile-sections) Extract downcastIn order to be able to plot a section we need individual profiles at each location at a given time. We cannot just resample by time because it is profiling data, which is not binned in vertical. A few steps I would take next: - extract the downcast only- assign one time to each profile **Trying to split casts based on changes in pressure**
###Code
# plot first few profiles from initial dataset
dummy = ds['inshore']
dummy.ctdpf_ckl_seawater_pressure[0:2000].plot(marker='*',linestyle='')
###Output
_____no_output_____
###Markdown
We can see that there is always a downcast, followed by a time gap, then an upcast, and then the next downcast.
###Code
# plot first cast to check
fig,ax = plt.subplots(ncols=2,figsize=(10,4))
dummy.ctdpf_ckl_seawater_pressure[500:1200].plot(marker='*',linestyle='',ax=ax[0])
# plot difference in pressure
dummy.ctdpf_ckl_seawater_pressure[500:1200].diff(dim='time').plot(marker='*',linestyle='',ax=ax[1])
###Output
_____no_output_____
###Markdown
Based on these plots I can apply a threshold of 0.1 to diff(pressure). I tried 0.2, but then we lose too much data.
###Code
# select only data where pressure is increasing
dummy_down = dummy.where(dummy.ctdpf_ckl_seawater_pressure.diff(dim='time')<0.1)
# plot to check if it worked
dummy_down.ctdpf_ckl_seawater_pressure[0:1000].plot(marker='*',linestyle='')
###Output
_____no_output_____
###Markdown
Seems to work sort of ok?
###Code
# plt scatter of old vs. new
fig,ax = plt.subplots(ncols=2,figsize=(15,4),sharey=True,constrained_layout=True)
ax[0].scatter(dummy.time,dummy.ctdpf_ckl_seawater_pressure,s=1,
c=dummy.ctdpf_ckl_seawater_temperature,
cmap = plt.get_cmap('RdYlBu_r',30),vmin=14,vmax=23)
ax[0].set_title('all data')
# ax[0].invert_yaxis()
ax[1].scatter(dummy_down.time,dummy_down.ctdpf_ckl_seawater_pressure,s=1,
c=dummy_down.ctdpf_ckl_seawater_temperature,
cmap = plt.get_cmap('RdYlBu_r',30),vmin=14,vmax=23)
ax[1].set_title('down cast only')
ax[1].invert_yaxis()
###Output
_____no_output_____
###Markdown
[Back to top](OHW20-project:-OOI-profile-sections) Extract down/upcast or both
###Code
def get_cast(ctd:xr, cast:str = 'down'):
"""
Extract downcast, upcast or both and assign a specific profile attribute based on the cast
"""
if cast == 'up':
# select only data where pressure is decreasing
down = ctd.where((np.diff(ctd.ctdpf_ckl_seawater_pressure) < 0.1) &
(np.fabs(ctd.ctdpf_ckl_seawater_pressure.diff(dim = 'time')) > .1)).dropna(dim = 'time')
# out = down.assign(ctdpf_ckl_cast=xr.ones_like(down['ctdpf_ckl_seawater_pressure']) * 1)
out = down.assign(ctdpf_ckl_cast='upcast')
return out
if cast == 'down':
# select only data where pressure is increasing
down = ctd.where(ctd.ctdpf_ckl_seawater_pressure.diff(dim = 'time') > 0.1).dropna(dim = 'time')
# out = down.assign(ctdpf_ckl_cast=xr.ones_like(down['ctdpf_ckl_seawater_pressure']) * 2)
out = down.assign(ctdpf_ckl_cast='downcast')
return out
if cast == 'full':
down = ctd.where(((np.diff(ctd.ctdpf_ckl_seawater_pressure) < 0.1) &
(np.fabs(ctd.ctdpf_ckl_seawater_pressure.diff(dim = 'time')) > .1)) |
(ctd.ctdpf_ckl_seawater_pressure.diff(dim = 'time') > 0.1)).dropna(dim = 'time')
idx = np.where(np.diff(xr.concat([down.ctdpf_ckl_seawater_pressure[0],
down.ctdpf_ckl_seawater_pressure], dim='time')) > 0.1, 'downcast', 'upcast')
out = down.assign(ctdpf_ckl_cast=xr.DataArray(idx, dims=["time"]))
return out
if cast not in ('up', 'down', 'full'):
raise NameError(
f'Expected cast name to be `up`, `down`, or `full`, instead got {cast}'
)
###Output
_____no_output_____
###Markdown
Function to plot the timeseries
###Code
def plot_cast(sds:xr, label:str, ax, c=None, cmap=None) -> None:
if 'temp' in label:
c = sds.ctdpf_ckl_seawater_temperature
cmap = plt.get_cmap('cmo.thermal',30)
if 'sal' in label:
c = sds.practical_salinity
cmap = plt.get_cmap('cmo.haline',30)
vmin, vmax = c.min(), c.max()
s = ax.scatter(sds.time,
sds.ctdpf_ckl_seawater_pressure,
s=1, c=c, cmap=cmap, vmin=vmin, vmax=vmax)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="2%", pad=0.05)
plt.colorbar(s, cax=cax, label=label)
for tlab in ax.get_xticklabels():
tlab.set_rotation(40)
tlab.set_horizontalalignment('right')
###Output
_____no_output_____
###Markdown
Example with downcast
###Code
%time
downcast={}
cast = 'down'
for loc in list(data_url.keys())[:1]:
downcast[loc] = get_cast(ds[loc], cast=cast)
fig, ax = plt.subplots(ncols=2,figsize=(15,4), sharey=True, constrained_layout=True)
plot_cast(sds=downcast[loc],
label='temperature [\N{DEGREE SIGN}C]', ax=ax[0])
plot_cast(sds=downcast[loc],
label='practical salinity', ax=ax[1])
ax[1].invert_yaxis()
fig.suptitle(f"{loc} [{cast}cast only]", fontweight='bold')
fig.autofmt_xdate()
plt.subplots_adjust(hspace=0.5)
###Output
_____no_output_____
###Markdown
Example with upcast
###Code
%time
upcast={}
cast = 'up'
for loc in list(data_url.keys())[:1]:
upcast[loc] = get_cast(ds[loc], cast=cast)
# plt scatter of old vs. new
fig, ax = plt.subplots(ncols=2,figsize=(15,4), sharey=True, constrained_layout=True)
plot_cast(sds=upcast[loc],
label='temperature [\N{DEGREE SIGN}C]', ax=ax[0])
plot_cast(sds=upcast[loc],
label='practical salinity', ax=ax[1])
ax[1].invert_yaxis()
fig.suptitle(f"{loc} [{cast}cast only]", fontweight='bold')
fig.autofmt_xdate()
plt.subplots_adjust(hspace=0.5)
###Output
_____no_output_____
###Markdown
**Next step: Assign only one time for each profile?**I think we can assign only one time for each profile or convert the time series into 2D profile arrays. From that, we can work with individual profiles quite easily, including vertical interpolation/binning. Example with fullcast with intermittent values removed
###Code
fullcast={}
cast = 'full'
for loc in list(data_url.keys())[:1]:
fullcast[loc] = get_cast(ds[loc], cast=cast)
fig, ax = plt.subplots(ncols=2,figsize=(15,4), sharey=True, constrained_layout=True)
# display one of the casts just for comparison with previous
sds = fullcast[loc].where(fullcast[loc].ctdpf_ckl_cast == 'upcast')
plot_cast(sds=sds,
label='temperature [\N{DEGREE SIGN}C]', ax=ax[0])
plot_cast(sds=sds,
label='practical salinity', ax=ax[1])
ax[1].invert_yaxis()
fig.suptitle(f"{loc} [{cast}cast]", fontweight='bold')
fig.autofmt_xdate()
plt.subplots_adjust(hspace=0.5)
###Output
_____no_output_____
###Markdown
[Back to top](OHW20-project:-OOI-profile-sections) The functions below perform vertical discretization of individual profiles. Optionally, smooth the vertical profile
###Code
def profile_filt(data: dict, key: str, window_length: int, polyorder: int, profile_disp: bool) -> dict:
"""
Profile smoothing using `savgol_filter`. In general `savgol_filter` produces good results compared to other
methods I have tried. For more, please check https://docs.scipy.org/doc/scipy/reference/signal.html
"""
from scipy.signal import savgol_filter
out = data
out[key] = savgol_filter(data[key], window_length=window_length, polyorder=polyorder)
if profile_disp is True:
fig, ax = plt.subplots()
ax.plot(out[key], data['pres'], '-r', label='savgol_filter')
ax.plot(data[key], data['pres'], '-k', label='original')
ax.invert_yaxis()
plt.show()
return out
def profile_interp(pres: np.array, y: np.array, key: str, start: float = 20., end: float = 100.,
step: float = 1., method: str = 'binning', filt_profile: bool = False,
window_length: int = 5, polyorder: int = 1, profile_disp: bool = False) -> dict:
"""
Interpolate CTD profiles into a constant sampling rate.
Optionally, smooth the profile. Often needed in the case of fluorescence profiles
:param: pres - pressure or any other x-like var
:param: y - temperature, salinity, etc.
    :param: start - start position of vertical discretization (pressure). default 20
:param: end - end position of the vertical discretization (pressure). default 100
:param: step - discretization step. default 1
:param: method - discretization method (binning or interpolation). default binning
:param: filt_profile - whether to filter the profile or not (True-filter). default False
:param: window_length - if filt_profile is set to True: the length of the filter window
(i.e., the number of coefficients). default 5
:param: polyorder - order of the polynomial used to fit the samples. default 1
:param: profile_disp - if filt_profile is set to True: displayed the original versus filtered profile
"""
znew = np.arange(start, end + step, step)
if window_length % 2 == 0:
window_length -= 1
sz = pres.size
if sz % 2 == 0:
sz -= 1
# window size == 5 or else odd
window_length = min(window_length, sz)
polyorder = min(polyorder, window_length)
if 'bin' in method:
interp_prof = []
append = interp_prof.append
# There is a 'groupby' command from xarray which is handy.
# But due to time constraint I went the traditional way.
for i, z in enumerate(znew[:-1]):
upper = z + step / 2
lower = z - step / 2
if i == 0:
lower = max(0, z - step / 2)
idx = np.where((pres > lower) & (pres <= upper))[0]
if idx.size == 0:
append(np.nan)
continue
if y[idx].mean().values > 100:
print(y[idx])
append(y[idx].mean().values)
out = {'pres': znew[:-1], key: np.array(interp_prof)}
if filt_profile is True:
return profile_filt(data=out, key=key,
window_length=window_length,
polyorder=polyorder,
profile_disp=profile_disp)
return out
if 'interp' in method:
from scipy import interpolate
# temperature, salinity, etc
f = interpolate.interp1d(pres, y, fill_value=(np.nan, np.nan))
out = {'pres': znew, key: f(znew)}
if filt_profile is True:
return profile_filt(data=out, key=key,
window_length=window_length,
polyorder=polyorder,
profile_disp=profile_disp)
return out
###Output
_____no_output_____
###Markdown
[Back to top](OHW20-project:-OOI-profile-sections) Indices to split individual profiles
###Code
def split_profiles(pres: np.array) -> tuple:
pos = np.where(np.diff(pres) < 0)[0]
start_points = np.hstack((0, pos + 1))
end_points = np.hstack((pos, pres.size))
return start_points, end_points
start_idx, end_idx = split_profiles(pres=downcast['inshore'].ctdpf_ckl_seawater_pressure)
start_idx, end_idx, start_idx.size, end_idx.size
###Output
_____no_output_____
###Markdown
Vis Split Profile
###Code
fig, ax = plt.subplots(1,2, figsize=(14, 12), sharey='all')
z = downcast['inshore'].ctdpf_ckl_seawater_pressure[start_idx[0]:end_idx[0]]
t = downcast['inshore'].ctdpf_ckl_seawater_temperature[start_idx[0]:end_idx[0]]
s = downcast['inshore'].practical_salinity[start_idx[0]:end_idx[0]]
ax[0].plot(t, -z, label='original')
ax[1].plot(s, -z)
# Interp/bin profile
keys = 'temp', 'sal'
for i, y in enumerate((t, s)):
# discretization by grouping
out = profile_interp(pres=z, y=y, key=keys[i], start=np.floor(z.min()),
end=np.ceil(z.max()), step=1, method='binning')
ax[i].plot(out[keys[i]], -out['pres'], '-k', label='binned')
# discretization by interpolation
out = profile_interp(pres=z, y=y, key=keys[i], start=np.ceil(z.min()),
end=np.floor(z.max()), step=1, method='interpolate')
    ax[i].plot(out[keys[i]], -out['pres'], ':r', label='interpolated')
# discretization and smoothing
out = profile_interp(pres=z, y=y, key=keys[i], start=np.floor(z.min()),
end=np.ceil(z.max()), step=1, method='binning', filt_profile=True)
ax[i].plot(out[keys[i]], -out['pres'], '--g', label='smoothed')
ax[0].legend()
###Output
_____no_output_____
###Markdown
Sorting profs into 2D arrays with equal depth range
###Code
sal = []
temp = []
depth = []
dates = []
append_s = sal.append
append_t = temp.append
append_d = dates.append
for i, (sidx, eidx) in enumerate(zip(start_idx, end_idx)):
s = downcast['inshore'].practical_salinity[sidx:eidx+1]
t = downcast['inshore'].ctdpf_ckl_seawater_temperature[sidx:eidx+1]
z = downcast['inshore'].ctdpf_ckl_seawater_pressure[sidx:eidx+1]
p = (100 * i + 1) / start_idx.size
if z.size < 10:
if p % 5 < 0.1:
print(f'Start: {sidx:>6} | End: {eidx:>6} | ArrayLen: {t.size:>4} | Skip | {p:.2f}%')
continue
if p % 5 < 0.1:
print(f'Start: {sidx:>6} | End: {eidx:>6} | ArrayLen: {t.size:>4} | {p:.2f}%')
append_d(downcast['inshore'].coords['time'][sidx:eidx+1].astype('float').values.mean())
# Interp/bin profile
# discretization by grouping
out_s = profile_interp(pres=z, y=s, key='sal')
append_s(out_s['sal'])
if sidx == 0:
depth = out_s['pres']
out_t = profile_interp(pres=z, y=t, key='temp')
append_t(out_t['temp'])
sal = np.array(sal).T
temp = np.array(temp).T
dates = np.repeat(np.array(dates).reshape(1, -1), sal.shape[0], axis=0)
depth = np.repeat(np.array(depth).reshape(-1, 1), sal.shape[1], axis=1)
print(sal, sal.shape, temp.shape, dates.shape, depth.shape)
fig, ax = plt.subplots(2,1, figsize=(12, 7), sharex='all')
ax[0].pcolormesh(dates, depth, np.ma.masked_where(np.isnan(sal), sal),
cmap = plt.get_cmap('cmo.haline',30))
ax[0].set_ylim(depth.min(), depth.max())
ax[0].invert_yaxis()
ax[1].pcolormesh(dates, depth, np.ma.masked_where(np.isnan(temp), temp),
cmap=plt.get_cmap('cmo.thermal',30))
ax[1].set_ylim(depth.min(), depth.max())
ax[1].invert_yaxis()
###Output
_____no_output_____ |
first_edition/3.6-classifying-newswires.ipynb | ###Markdown
Classifying newswires: a multi-class classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----In the previous section we saw how to classify vector inputs into two mutually exclusive classes using a densely-connected neural network. But what happens when you have more than two classes? In this section, we will build a network to classify Reuters newswires into 46 different mutually-exclusive topics. Since we have many classes, this problem is an instance of "multi-class classification", and since each data point should be classified into only one category, the problem is more specifically an instance of "single-label, multi-class classification". If each data point could have belonged to multiple categories (in our case, topics) then we would be facing a "multi-label, multi-class classification" problem. The Reuters datasetWe will be working with the _Reuters dataset_, a set of short newswires and their topics, published by Reuters in 1986. It's a very simple, widely used toy dataset for text classification. There are 46 different topics; some topics are more represented than others, but each topic has at least 10 examples in the training set.Like IMDB and MNIST, the Reuters dataset comes packaged as part of Keras. Let's take a look right away:
###Code
from keras.datasets import reuters
(train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
Like with the IMDB dataset, the argument `num_words=10000` restricts the data to the 10,000 most frequently occurring words found in the data.We have 8,982 training examples and 2,246 test examples:
###Code
len(train_data)
len(test_data)
###Output
_____no_output_____
###Markdown
As with the IMDB reviews, each example is a list of integers (word indices):
###Code
train_data[10]
###Output
_____no_output_____
###Markdown
Here's how you can decode it back to words, in case you are curious:
###Code
word_index = reuters.get_word_index()
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# Note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_newswire = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_newswire
###Output
_____no_output_____
###Markdown
The label associated with an example is an integer between 0 and 45: a topic index.
###Code
train_labels[10]
###Output
_____no_output_____
###Markdown
Preparing the dataWe can vectorize the data with the exact same code as in our previous example:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1.
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
To vectorize the labels, there are two possibilities: we could just cast the label list as an integer tensor, or we could use a "one-hot" encoding. One-hot encoding is a widely used format for categorical data, also called "categorical encoding". For a more detailed explanation of one-hot encoding, you can refer to Chapter 6, Section 1. In our case, one-hot encoding of our labels consists in embedding each label as an all-zero vector with a 1 in the place of the label index, e.g.:
###Code
def to_one_hot(labels, dimension=46):
results = np.zeros((len(labels), dimension))
for i, label in enumerate(labels):
results[i, label] = 1.
return results
# Our vectorized training labels
one_hot_train_labels = to_one_hot(train_labels)
# Our vectorized test labels
one_hot_test_labels = to_one_hot(test_labels)
###Output
_____no_output_____
###Markdown
Note that there is a built-in way to do this in Keras, which you have already seen in action in our MNIST example:
###Code
from keras.utils.np_utils import to_categorical
one_hot_train_labels = to_categorical(train_labels)
one_hot_test_labels = to_categorical(test_labels)
###Output
_____no_output_____
###Markdown
Building our networkThis topic classification problem looks very similar to our previous movie review classification problem: in both cases, we are trying to classify short snippets of text. There is however a new constraint here: the number of output classes has gone from 2 to 46, i.e. the dimensionality of the output space is much larger. In a stack of `Dense` layers like what we were using, each layer can only access information present in the output of the previous layer. If one layer drops some information relevant to the classification problem, this information can never be recovered by later layers: each layer can potentially become an "information bottleneck". In our previous example, we were using 16-dimensional intermediate layers, but a 16-dimensional space may be too limited to learn to separate 46 different classes: such small layers may act as information bottlenecks, permanently dropping relevant information.For this reason we will use larger layers. Let's go with 64 units:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
###Output
2022-02-18 18:59:06.398026: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/R/4.0.5/lib/R/lib::/lib:/usr/local/lib:/usr/lib/x86_64-linux-gnu:/usr/lib/jvm/java-11-openjdk-amd64/lib/server
2022-02-18 18:59:06.398067: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2022-02-18 18:59:06.398085: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (taylor): /proc/driver/nvidia/version does not exist
2022-02-18 18:59:06.398306: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
###Markdown
There are two other things you should note about this architecture:* We are ending the network with a `Dense` layer of size 46. This means that for each input sample, our network will output a 46-dimensional vector. Each entry in this vector (each dimension) will encode a different output class.* The last layer uses a `softmax` activation. You have already seen this pattern in the MNIST example. It means that the network will output a _probability distribution_ over the 46 different output classes, i.e. for every input sample, the network will produce a 46-dimensional output vector where `output[i]` is the probability that the sample belongs to class `i`. The 46 scores will sum to 1.The best loss function to use in this case is `categorical_crossentropy`. It measures the distance between two probability distributions: in our case, between the probability distribution output by our network, and the true distribution of the labels. By minimizing the distance between these two distributions, we train our network to output something as close as possible to the true labels.
###Code
model.summary()
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
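###Markdown
To make the loss concrete, here is a minimal NumPy sketch (hypothetical numbers, not the Keras internals, using the `np` alias imported earlier) of what `categorical_crossentropy` computes for a single sample: the negative log of the probability the network assigns to the true class.
###Code
true_label = np.array([0., 0., 1., 0.])      # hypothetical one-hot label (class 2)
predicted = np.array([0.1, 0.2, 0.6, 0.1])   # hypothetical softmax output
print(-np.sum(true_label * np.log(predicted)))  # = -log(0.6), about 0.51
###Output
_____no_output_____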
###Markdown
Validating our approachLet's set apart 1,000 samples in our training data to use as a validation set:
###Code
x_val = x_train[:1000]
partial_x_train = x_train[1000:]
y_val = one_hot_train_labels[:1000]
partial_y_train = one_hot_train_labels[1000:]
###Output
_____no_output_____
###Markdown
Now let's train our network for 20 epochs:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
2022-02-18 19:08:28.712900: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
###Markdown
Let's display its loss and accuracy curves:
###Code
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
history.history.keys()
plt.clf() # clear figure
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
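###Markdown
The curves above can also be summarized numerically. Here is a minimal sketch (assuming the `history` object from the fit above is still in memory) that reports the epoch with the lowest validation loss:
###Code
best_epoch = int(np.argmin(history.history['val_loss'])) + 1  # epochs are 1-indexed
print(f"Lowest validation loss at epoch {best_epoch}")
###Output
_____no_output_____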
###Markdown
It seems that the network starts overfitting after 8 epochs. Let's train a new network from scratch for 8 epochs, then let's evaluate it on the test set:
###Code
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(partial_x_train,
partial_y_train,
epochs=8,
batch_size=512,
validation_data=(x_val, y_val))
results = model.evaluate(x_test, one_hot_test_labels)
results
###Output
_____no_output_____
###Markdown
Our approach reaches an accuracy of ~78%. With a balanced binary classification problem, the accuracy reached by a purely random classifier would be 50%, but in our case it is closer to 19%, so our results seem pretty good, at least when compared to a random baseline:
###Code
import copy
test_labels_copy = copy.copy(test_labels)
np.random.shuffle(test_labels_copy)
float(np.sum(np.array(test_labels) == np.array(test_labels_copy))) / len(test_labels)
###Output
_____no_output_____
###Markdown
Generating predictions on new dataWe can verify that the `predict` method of our model instance returns a probability distribution over all 46 topics. Let's generate topic predictions for all of the test data:
###Code
predictions = model.predict(x_test)
###Output
_____no_output_____
###Markdown
Each entry in `predictions` is a vector of length 46:
###Code
predictions[0].shape
###Output
_____no_output_____
###Markdown
The coefficients in this vector sum to 1:
###Code
np.sum(predictions[0])
###Output
_____no_output_____
###Markdown
The largest entry is the predicted class, i.e. the class with the highest probability:
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
###Markdown
A different way to handle the labels and the lossWe mentioned earlier that another way to encode the labels would be to cast them as an integer tensor, like such:
###Code
y_train = np.array(train_labels)
y_test = np.array(test_labels)
###Output
_____no_output_____
###Markdown
The only thing it would change is the choice of the loss function. Our previous loss, `categorical_crossentropy`, expects the labels to follow a categorical encoding. With integer labels, we should use `sparse_categorical_crossentropy`:
###Code
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['acc'])
###Output
_____no_output_____
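###Markdown
As a quick check (hypothetical numbers, using the `np` alias imported earlier), the sparse variant applied to an integer label gives the same value as `categorical_crossentropy` applied to the equivalent one-hot vector:
###Code
probs = np.array([0.1, 0.2, 0.6, 0.1])  # hypothetical softmax output
label = 2                               # integer label
one_hot = np.zeros(4)
one_hot[label] = 1.
print(-np.log(probs[label]), -np.sum(one_hot * np.log(probs)))  # both equal -log(0.6)
###Output
_____no_output_____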
###Markdown
This new loss function is still mathematically the same as `categorical_crossentropy`; it just has a different interface. On the importance of having sufficiently large intermediate layersWe mentioned earlier that since our final outputs were 46-dimensional, we should avoid intermediate layers with much less than 46 hidden units. Now let's try to see what happens when we introduce an information bottleneck by having intermediate layers significantly less than 46-dimensional, e.g. 4-dimensional.
###Code
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(4, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=128,
validation_data=(x_val, y_val))
###Output
Epoch 1/20
63/63 [==============================] - 1s 16ms/step - loss: 3.0881 - accuracy: 0.2796 - val_loss: 2.4175 - val_accuracy: 0.5400
Epoch 2/20
63/63 [==============================] - 1s 19ms/step - loss: 2.0037 - accuracy: 0.5546 - val_loss: 1.7838 - val_accuracy: 0.5660
Epoch 3/20
63/63 [==============================] - 0s 7ms/step - loss: 1.6127 - accuracy: 0.5854 - val_loss: 1.6212 - val_accuracy: 0.5830
Epoch 4/20
63/63 [==============================] - 0s 7ms/step - loss: 1.4393 - accuracy: 0.6024 - val_loss: 1.5500 - val_accuracy: 0.5870
Epoch 5/20
63/63 [==============================] - 0s 6ms/step - loss: 1.3245 - accuracy: 0.6114 - val_loss: 1.5226 - val_accuracy: 0.5890
Epoch 6/20
63/63 [==============================] - 1s 12ms/step - loss: 1.2398 - accuracy: 0.6255 - val_loss: 1.4830 - val_accuracy: 0.6200
Epoch 7/20
63/63 [==============================] - 0s 7ms/step - loss: 1.1703 - accuracy: 0.6502 - val_loss: 1.5063 - val_accuracy: 0.6200
Epoch 8/20
63/63 [==============================] - 0s 5ms/step - loss: 1.1123 - accuracy: 0.6599 - val_loss: 1.4905 - val_accuracy: 0.6250
Epoch 9/20
63/63 [==============================] - 0s 5ms/step - loss: 1.0627 - accuracy: 0.6630 - val_loss: 1.5324 - val_accuracy: 0.6210
Epoch 10/20
63/63 [==============================] - 0s 5ms/step - loss: 1.0167 - accuracy: 0.6695 - val_loss: 1.5506 - val_accuracy: 0.6220
Epoch 11/20
63/63 [==============================] - 0s 7ms/step - loss: 0.9744 - accuracy: 0.6730 - val_loss: 1.5851 - val_accuracy: 0.6220
Epoch 12/20
63/63 [==============================] - 0s 5ms/step - loss: 0.9343 - accuracy: 0.6839 - val_loss: 1.6171 - val_accuracy: 0.6310
Epoch 13/20
63/63 [==============================] - 0s 4ms/step - loss: 0.8941 - accuracy: 0.7055 - val_loss: 1.6598 - val_accuracy: 0.6350
Epoch 14/20
63/63 [==============================] - 0s 4ms/step - loss: 0.8623 - accuracy: 0.7274 - val_loss: 1.7129 - val_accuracy: 0.6440
Epoch 15/20
63/63 [==============================] - 0s 4ms/step - loss: 0.8265 - accuracy: 0.7593 - val_loss: 1.7691 - val_accuracy: 0.6480
Epoch 16/20
63/63 [==============================] - 0s 4ms/step - loss: 0.7981 - accuracy: 0.7781 - val_loss: 1.8676 - val_accuracy: 0.6510
Epoch 17/20
63/63 [==============================] - 0s 4ms/step - loss: 0.7693 - accuracy: 0.7903 - val_loss: 1.8621 - val_accuracy: 0.6530
Epoch 18/20
63/63 [==============================] - 0s 4ms/step - loss: 0.7426 - accuracy: 0.7962 - val_loss: 1.9314 - val_accuracy: 0.6560
Epoch 19/20
63/63 [==============================] - 0s 5ms/step - loss: 0.7217 - accuracy: 0.8018 - val_loss: 1.9695 - val_accuracy: 0.6610
Epoch 20/20
63/63 [==============================] - 0s 6ms/step - loss: 0.7001 - accuracy: 0.8066 - val_loss: 2.0464 - val_accuracy: 0.6600
###Markdown
Classifying newswires: a multi-class classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----In the previous section we saw how to classify vector inputs into two mutually exclusive classes using a densely-connected neural network. But what happens when you have more than two classes? In this section, we will build a network to classify Reuters newswires into 46 different mutually-exclusive topics. Since we have many classes, this problem is an instance of "multi-class classification", and since each data point should be classified into only one category, the problem is more specifically an instance of "single-label, multi-class classification". If each data point could have belonged to multiple categories (in our case, topics) then we would be facing a "multi-label, multi-class classification" problem. The Reuters datasetWe will be working with the _Reuters dataset_, a set of short newswires and their topics, published by Reuters in 1986. It's a very simple, widely used toy dataset for text classification. There are 46 different topics; some topics are more represented than others, but each topic has at least 10 examples in the training set.Like IMDB and MNIST, the Reuters dataset comes packaged as part of Keras. Let's take a look right away:
###Code
from keras.datasets import reuters
(train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
Like with the IMDB dataset, the argument `num_words=10000` restricts the data to the 10,000 most frequently occurring words found in the data.We have 8,982 training examples and 2,246 test examples:
###Code
len(train_data)
len(test_data)
###Output
_____no_output_____
###Markdown
As with the IMDB reviews, each example is a list of integers (word indices):
###Code
train_data[10]
###Output
_____no_output_____
###Markdown
Here's how you can decode it back to words, in case you are curious:
###Code
word_index = reuters.get_word_index()
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# Note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_newswire = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_newswire
###Output
_____no_output_____
###Markdown
The label associated with an example is an integer between 0 and 45: a topic index.
###Code
train_labels[10]
###Output
_____no_output_____
###Markdown
Preparing the dataWe can vectorize the data with the exact same code as in our previous example:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1.
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
len(x_test[10])
x_test[10]
len(test_data[10])
test_data[10]
x_train
###Output
_____no_output_____
###Markdown
To vectorize the labels, there are two possibilities: we could just cast the label list as an integer tensor, or we could use a "one-hot" encoding. One-hot encoding is a widely used format for categorical data, also called "categorical encoding". For a more detailed explanation of one-hot encoding, you can refer to Chapter 6, Section 1. In our case, one-hot encoding of our labels consists in embedding each label as an all-zero vector with a 1 in the place of the label index, e.g.:
###Code
def to_one_hot(labels, dimension=46):
results = np.zeros((len(labels), dimension))
for i, label in enumerate(labels):
results[i, label] = 1.
return results
# Our vectorized training labels
one_hot_train_labels = to_one_hot(train_labels)
# Our vectorized test labels
one_hot_test_labels = to_one_hot(test_labels)
###Output
_____no_output_____
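###Markdown
For a tiny label set, the same encoding can be read directly off an identity matrix; a minimal sketch with hypothetical labels:
###Code
print(np.eye(3)[[0, 2, 1]])  # each row of the identity matrix is a one-hot vector
###Output
_____no_output_____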
###Markdown
Note that there is a built-in way to do this in Keras, which you have already seen in action in our MNIST example:
###Code
from keras.utils.np_utils import to_categorical
one_hot_train_labels = to_categorical(train_labels)
one_hot_test_labels = to_categorical(test_labels)
###Output
_____no_output_____
###Markdown
Building our networkThis topic classification problem looks very similar to our previous movie review classification problem: in both cases, we are trying to classify short snippets of text. There is however a new constraint here: the number of output classes has gone from 2 to 46, i.e. the dimensionality of the output space is much larger. In a stack of `Dense` layers like what we were using, each layer can only access information present in the output of the previous layer. If one layer drops some information relevant to the classification problem, this information can never be recovered by later layers: each layer can potentially become an "information bottleneck". In our previous example, we were using 16-dimensional intermediate layers, but a 16-dimensional space may be too limited to learn to separate 46 different classes: such small layers may act as information bottlenecks, permanently dropping relevant information.For this reason we will use larger layers. Let's go with 64 units:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
###Output
_____no_output_____
###Markdown
There are two other things you should note about this architecture:* We are ending the network with a `Dense` layer of size 46. This means that for each input sample, our network will output a 46-dimensional vector. Each entry in this vector (each dimension) will encode a different output class.* The last layer uses a `softmax` activation. You have already seen this pattern in the MNIST example. It means that the network will output a _probability distribution_ over the 46 different output classes, i.e. for every input sample, the network will produce a 46-dimensional output vector where `output[i]` is the probability that the sample belongs to class `i`. The 46 scores will sum to 1.The best loss function to use in this case is `categorical_crossentropy`. It measures the distance between two probability distributions: in our case, between the probability distribution output by our network, and the true distribution of the labels. By minimizing the distance between these two distributions, we train our network to output something as close as possible to the true labels.
###Code
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
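###Markdown
As a small aside, here is a minimal NumPy sketch of softmax itself (hypothetical logits), showing that the outputs are positive and sum to 1:
###Code
logits = np.array([2.0, 1.0, 0.1])
probs = np.exp(logits) / np.sum(np.exp(logits))
print(probs, probs.sum())
###Output
_____no_output_____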
###Markdown
Validating our approachLet's set apart 1,000 samples in our training data to use as a validation set:
###Code
x_val = x_train[:1000]
partial_x_train = x_train[1000:]
y_val = one_hot_train_labels[:1000]
partial_y_train = one_hot_train_labels[1000:]
###Output
_____no_output_____
###Markdown
Now let's train our network for 20 epochs:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Epoch 1/20
16/16 [==============================] - 1s 18ms/step - loss: 2.4898 - accuracy: 0.5328 - val_loss: 1.6725 - val_accuracy: 0.6320
Epoch 2/20
16/16 [==============================] - 0s 10ms/step - loss: 1.3782 - accuracy: 0.7045 - val_loss: 1.2976 - val_accuracy: 0.7150
Epoch 3/20
16/16 [==============================] - 0s 10ms/step - loss: 1.0459 - accuracy: 0.7715 - val_loss: 1.1395 - val_accuracy: 0.7530
Epoch 4/20
16/16 [==============================] - 0s 10ms/step - loss: 0.8348 - accuracy: 0.8188 - val_loss: 1.0532 - val_accuracy: 0.7650
Epoch 5/20
16/16 [==============================] - 0s 10ms/step - loss: 0.6707 - accuracy: 0.8535 - val_loss: 0.9689 - val_accuracy: 0.7970
Epoch 6/20
16/16 [==============================] - 0s 10ms/step - loss: 0.5339 - accuracy: 0.8881 - val_loss: 0.9315 - val_accuracy: 0.7990
Epoch 7/20
16/16 [==============================] - 0s 10ms/step - loss: 0.4307 - accuracy: 0.9082 - val_loss: 0.9210 - val_accuracy: 0.8040
Epoch 8/20
16/16 [==============================] - 0s 10ms/step - loss: 0.3555 - accuracy: 0.9258 - val_loss: 0.8981 - val_accuracy: 0.8120
Epoch 9/20
16/16 [==============================] - 0s 10ms/step - loss: 0.2882 - accuracy: 0.9392 - val_loss: 0.9127 - val_accuracy: 0.8120
Epoch 10/20
16/16 [==============================] - 0s 10ms/step - loss: 0.2470 - accuracy: 0.9441 - val_loss: 0.9323 - val_accuracy: 0.8010
Epoch 11/20
16/16 [==============================] - 0s 10ms/step - loss: 0.2075 - accuracy: 0.9484 - val_loss: 0.9366 - val_accuracy: 0.8050
Epoch 12/20
16/16 [==============================] - 0s 10ms/step - loss: 0.1868 - accuracy: 0.9504 - val_loss: 0.9906 - val_accuracy: 0.7960
Epoch 13/20
16/16 [==============================] - 0s 10ms/step - loss: 0.1656 - accuracy: 0.9543 - val_loss: 0.9662 - val_accuracy: 0.8130
Epoch 14/20
16/16 [==============================] - 0s 10ms/step - loss: 0.1511 - accuracy: 0.9551 - val_loss: 1.0694 - val_accuracy: 0.7860
Epoch 15/20
16/16 [==============================] - 0s 10ms/step - loss: 0.1421 - accuracy: 0.9550 - val_loss: 1.0370 - val_accuracy: 0.7960
Epoch 16/20
16/16 [==============================] - 0s 10ms/step - loss: 0.1348 - accuracy: 0.9572 - val_loss: 0.9891 - val_accuracy: 0.8110
Epoch 17/20
16/16 [==============================] - 0s 10ms/step - loss: 0.1233 - accuracy: 0.9558 - val_loss: 1.0376 - val_accuracy: 0.8110
Epoch 18/20
16/16 [==============================] - 0s 10ms/step - loss: 0.1189 - accuracy: 0.9573 - val_loss: 1.0653 - val_accuracy: 0.7970
Epoch 19/20
16/16 [==============================] - 0s 10ms/step - loss: 0.1145 - accuracy: 0.9592 - val_loss: 1.0573 - val_accuracy: 0.8050
Epoch 20/20
16/16 [==============================] - 0s 9ms/step - loss: 0.1103 - accuracy: 0.9578 - val_loss: 1.1183 - val_accuracy: 0.8020
###Markdown
Let's display its loss and accuracy curves:
###Code
history.history.keys()
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
It seems that the network starts overfitting after 8 epochs. Let's train a new network from scratch for 8 epochs, then let's evaluate it on the test set:
###Code
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(partial_x_train,
partial_y_train,
epochs=8,
batch_size=512,
validation_data=(x_val, y_val))
results = model.evaluate(x_test, one_hot_test_labels)
results
###Output
_____no_output_____
###Markdown
Our approach reaches an accuracy of ~78%. With a balanced binary classification problem, the accuracy reached by a purely random classifier would be 50%, but in our case it is closer to 19%, so our results seem pretty good, at least when compared to a random baseline:
###Code
import copy
test_labels_copy = copy.copy(test_labels)
np.random.shuffle(test_labels_copy)
float(np.sum(np.array(test_labels) == np.array(test_labels_copy))) / len(test_labels)
###Output
_____no_output_____
###Markdown
Generating predictions on new dataWe can verify that the `predict` method of our model instance returns a probability distribution over all 46 topics. Let's generate topic predictions for all of the test data:
###Code
predictions = model.predict(x_test)
predictions[0]
###Output
_____no_output_____
###Markdown
Each entry in `predictions` is a vector of length 46:
###Code
predictions[0].shape
###Output
_____no_output_____
###Markdown
The coefficients in this vector sum to 1:
###Code
np.sum(predictions[0])
###Output
_____no_output_____
###Markdown
The largest entry is the predicted class, i.e. the class with the highest probability:
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
###Markdown
A different way to handle the labels and the lossWe mentioned earlier that another way to encode the labels would be to cast them as an integer tensor, like such:
###Code
y_train = np.array(train_labels)
y_test = np.array(test_labels)
y_train
partial_x_train
###Output
_____no_output_____
###Markdown
The only thing it would change is the choice of the loss function. Our previous loss, `categorical_crossentropy`, expects the labels to follow a categorical encoding. With integer labels, we should use `sparse_categorical_crossentropy`:
###Code
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['acc'])
###Output
_____no_output_____
###Markdown
This new loss function is still mathematically the same as `categorical_crossentropy`; it just has a different interface. On the importance of having sufficiently large intermediate layersWe mentioned earlier that since our final outputs were 46-dimensional, we should avoid intermediate layers with much less than 46 hidden units. Now let's try to see what happens when we introduce an information bottleneck by having intermediate layers significantly less than 46-dimensional, e.g. 4-dimensional.
###Code
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(4, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=128,
validation_data=(x_val, y_val))
###Output
Epoch 1/20
63/63 [==============================] - 1s 7ms/step - loss: 3.0142 - accuracy: 0.2335 - val_loss: 2.4561 - val_accuracy: 0.2560
Epoch 2/20
63/63 [==============================] - 0s 6ms/step - loss: 2.0751 - accuracy: 0.3007 - val_loss: 1.8637 - val_accuracy: 0.4660
Epoch 3/20
63/63 [==============================] - 0s 6ms/step - loss: 1.5343 - accuracy: 0.6617 - val_loss: 1.5038 - val_accuracy: 0.6870
Epoch 4/20
63/63 [==============================] - 0s 6ms/step - loss: 1.2025 - accuracy: 0.7249 - val_loss: 1.3627 - val_accuracy: 0.6900
Epoch 5/20
63/63 [==============================] - 0s 6ms/step - loss: 1.0355 - accuracy: 0.7504 - val_loss: 1.3007 - val_accuracy: 0.7170
Epoch 6/20
63/63 [==============================] - 0s 6ms/step - loss: 0.9275 - accuracy: 0.7765 - val_loss: 1.3310 - val_accuracy: 0.7100
Epoch 7/20
63/63 [==============================] - 0s 6ms/step - loss: 0.8488 - accuracy: 0.7890 - val_loss: 1.3078 - val_accuracy: 0.7090
Epoch 8/20
63/63 [==============================] - 0s 6ms/step - loss: 0.7840 - accuracy: 0.8009 - val_loss: 1.3402 - val_accuracy: 0.7180
Epoch 9/20
63/63 [==============================] - 0s 6ms/step - loss: 0.7309 - accuracy: 0.8152 - val_loss: 1.3557 - val_accuracy: 0.7200
Epoch 10/20
63/63 [==============================] - 0s 6ms/step - loss: 0.6826 - accuracy: 0.8267 - val_loss: 1.4088 - val_accuracy: 0.7190
Epoch 11/20
63/63 [==============================] - 0s 6ms/step - loss: 0.6414 - accuracy: 0.8364 - val_loss: 1.4341 - val_accuracy: 0.7290
Epoch 12/20
63/63 [==============================] - 0s 6ms/step - loss: 0.6070 - accuracy: 0.8435 - val_loss: 1.5192 - val_accuracy: 0.7150
Epoch 13/20
63/63 [==============================] - 0s 6ms/step - loss: 0.5757 - accuracy: 0.8489 - val_loss: 1.5335 - val_accuracy: 0.7230
Epoch 14/20
63/63 [==============================] - 0s 6ms/step - loss: 0.5477 - accuracy: 0.8512 - val_loss: 1.6077 - val_accuracy: 0.7180
Epoch 15/20
63/63 [==============================] - 0s 6ms/step - loss: 0.5239 - accuracy: 0.8555 - val_loss: 1.6435 - val_accuracy: 0.7150
Epoch 16/20
63/63 [==============================] - 0s 7ms/step - loss: 0.4993 - accuracy: 0.8601 - val_loss: 1.6814 - val_accuracy: 0.7080
Epoch 17/20
63/63 [==============================] - 0s 6ms/step - loss: 0.4799 - accuracy: 0.8627 - val_loss: 1.7399 - val_accuracy: 0.7220
Epoch 18/20
63/63 [==============================] - 0s 5ms/step - loss: 0.4633 - accuracy: 0.8673 - val_loss: 1.8072 - val_accuracy: 0.7190
Epoch 19/20
63/63 [==============================] - 0s 5ms/step - loss: 0.4452 - accuracy: 0.8678 - val_loss: 1.8540 - val_accuracy: 0.7240
Epoch 20/20
63/63 [==============================] - 0s 5ms/step - loss: 0.4325 - accuracy: 0.8736 - val_loss: 1.9312 - val_accuracy: 0.7210
###Markdown
Classifying newswires: a multi-class classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----In the previous section we saw how to classify vector inputs into two mutually exclusive classes using a densely-connected neural network. But what happens when you have more than two classes? In this section, we will build a network to classify Reuters newswires into 46 different mutually-exclusive topics. Since we have many classes, this problem is an instance of "multi-class classification", and since each data point should be classified into only one category, the problem is more specifically an instance of "single-label, multi-class classification". If each data point could have belonged to multiple categories (in our case, topics) then we would be facing a "multi-label, multi-class classification" problem. The Reuters datasetWe will be working with the _Reuters dataset_, a set of short newswires and their topics, published by Reuters in 1986. It's a very simple, widely used toy dataset for text classification. There are 46 different topics; some topics are more represented than others, but each topic has at least 10 examples in the training set.Like IMDB and MNIST, the Reuters dataset comes packaged as part of Keras. Let's take a look right away:
###Code
from keras.datasets import reuters
(train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
Like with the IMDB dataset, the argument `num_words=10000` restricts the data to the 10,000 most frequently occurring words found in the data.We have 8,982 training examples and 2,246 test examples:
###Code
len(train_data)
len(test_data)
###Output
_____no_output_____
###Markdown
As with the IMDB reviews, each example is a list of integers (word indices):
###Code
train_data[10]
###Output
_____no_output_____
###Markdown
Here's how you can decode it back to words, in case you are curious:
###Code
word_index = reuters.get_word_index()
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# Note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_newswire = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_newswire
###Output
_____no_output_____
###Markdown
The label associated with an example is an integer between 0 and 45: a topic index.
###Code
train_labels[10]
###Output
_____no_output_____
###Markdown
Preparing the dataWe can vectorize the data with the exact same code as in our previous example:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1.
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
To vectorize the labels, there are two possibilities: we could just cast the label list as an integer tensor, or we could use a "one-hot" encoding. One-hot encoding is a widely used format for categorical data, also called "categorical encoding". For a more detailed explanation of one-hot encoding, you can refer to Chapter 6, Section 1. In our case, one-hot encoding of our labels consists in embedding each label as an all-zero vector with a 1 in the place of the label index, e.g.:
###Code
def to_one_hot(labels, dimension=46):
results = np.zeros((len(labels), dimension))
for i, label in enumerate(labels):
results[i, label] = 1.
return results
# Our vectorized training labels
one_hot_train_labels = to_one_hot(train_labels)
# Our vectorized test labels
one_hot_test_labels = to_one_hot(test_labels)
###Output
_____no_output_____
###Markdown
Note that there is a built-in way to do this in Keras, which you have already seen in action in our MNIST example:
###Code
from keras.utils.np_utils import to_categorical
one_hot_train_labels = to_categorical(train_labels)
one_hot_test_labels = to_categorical(test_labels)
###Output
_____no_output_____
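###Markdown
A quick sanity check that the manual `to_one_hot` encoding and the Keras helper agree on the training labels:
###Code
print(np.array_equal(to_one_hot(train_labels), to_categorical(train_labels)))  # expected: True
###Output
_____no_output_____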
###Markdown
Building our networkThis topic classification problem looks very similar to our previous movie review classification problem: in both cases, we are trying to classify short snippets of text. There is however a new constraint here: the number of output classes has gone from 2 to 46, i.e. the dimensionality of the output space is much larger. In a stack of `Dense` layers like what we were using, each layer can only access information present in the output of the previous layer. If one layer drops some information relevant to the classification problem, this information can never be recovered by later layers: each layer can potentially become an "information bottleneck". In our previous example, we were using 16-dimensional intermediate layers, but a 16-dimensional space may be too limited to learn to separate 46 different classes: such small layers may act as information bottlenecks, permanently dropping relevant information.For this reason we will use larger layers. Let's go with 64 units:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
###Output
_____no_output_____
###Markdown
There are two other things you should note about this architecture:* We are ending the network with a `Dense` layer of size 46. This means that for each input sample, our network will output a 46-dimensional vector. Each entry in this vector (each dimension) will encode a different output class.* The last layer uses a `softmax` activation. You have already seen this pattern in the MNIST example. It means that the network will output a _probability distribution_ over the 46 different output classes, i.e. for every input sample, the network will produce a 46-dimensional output vector where `output[i]` is the probability that the sample belongs to class `i`. The 46 scores will sum to 1.The best loss function to use in this case is `categorical_crossentropy`. It measures the distance between two probability distributions: in our case, between the probability distribution output by our network, and the true distribution of the labels. By minimizing the distance between these two distributions, we train our network to output something as close as possible to the true labels.
###Code
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Validating our approachLet's set apart 1,000 samples in our training data to use as a validation set:
###Code
x_val = x_train[:1000]
partial_x_train = x_train[1000:]
y_val = one_hot_train_labels[:1000]
partial_y_train = one_hot_train_labels[1000:]
###Output
_____no_output_____
###Markdown
Now let's train our network for 20 epochs:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Epoch 1/20
1/16 [>.............................] - ETA: 3s - loss: 3.8327 - accuracy: 0.0098
###Markdown
Let's display its loss and accuracy curves:
###Code
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
It seems that the network starts overfitting after 8 epochs. Let's train a new network from scratch for 8 epochs, then let's evaluate it on the test set:
###Code
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(partial_x_train,
partial_y_train,
epochs=8,
batch_size=512,
validation_data=(x_val, y_val))
results = model.evaluate(x_test, one_hot_test_labels)
results
###Output
_____no_output_____
###Markdown
Our approach reaches an accuracy of ~78%. With a balanced binary classification problem, the accuracy reached by a purely random classifier would be 50%, but in our case it is closer to 19%, so our results seem pretty good, at least when compared to a random baseline:
###Code
import copy
test_labels_copy = copy.copy(test_labels)
np.random.shuffle(test_labels_copy)
float(np.sum(np.array(test_labels) == np.array(test_labels_copy))) / len(test_labels)
###Output
_____no_output_____
###Markdown
Generating predictions on new dataWe can verify that the `predict` method of our model instance returns a probability distribution over all 46 topics. Let's generate topic predictions for all of the test data:
###Code
predictions = model.predict(x_test)
###Output
2022-03-24 13:55:56.125570: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
###Markdown
Each entry in `predictions` is a vector of length 46:
###Code
predictions[0].shape
###Output
_____no_output_____
###Markdown
The coefficients in this vector sum to 1:
###Code
np.sum(predictions[0])
###Output
_____no_output_____
###Markdown
The largest entry is the predicted class, i.e. the class with the highest probability:
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
###Markdown
A different way to handle the labels and the lossWe mentioned earlier that another way to encode the labels would be to cast them as an integer tensor, like such:
###Code
y_train = np.array(train_labels)
y_test = np.array(test_labels)
###Output
_____no_output_____
###Markdown
The only thing it would change is the choice of the loss function. Our previous loss, `categorical_crossentropy`, expects the labels to follow a categorical encoding. With integer labels, we should use `sparse_categorical_crossentropy`:
###Code
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['acc'])
###Output
_____no_output_____
###Markdown
This new loss function is still mathematically the same as `categorical_crossentropy`; it just has a different interface. On the importance of having sufficiently large intermediate layersWe mentioned earlier that since our final outputs were 46-dimensional, we should avoid intermediate layers with much less than 46 hidden units. Now let's try to see what happens when we introduce an information bottleneck by having intermediate layers significantly less than 46-dimensional, e.g. 4-dimensional.
###Code
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(4, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=128,
validation_data=(x_val, y_val))
###Output
Epoch 1/20
###Markdown
Classifying newswires: a multi-class classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----In the previous section we saw how to classify vector inputs into two mutually exclusive classes using a densely-connected neural network. But what happens when you have more than two classes? In this section, we will build a network to classify Reuters newswires into 46 different mutually-exclusive topics. Since we have many classes, this problem is an instance of "multi-class classification", and since each data point should be classified into only one category, the problem is more specifically an instance of "single-label, multi-class classification". If each data point could have belonged to multiple categories (in our case, topics) then we would be facing a "multi-label, multi-class classification" problem. The Reuters datasetWe will be working with the _Reuters dataset_, a set of short newswires and their topics, published by Reuters in 1986. It's a very simple, widely used toy dataset for text classification. There are 46 different topics; some topics are more represented than others, but each topic has at least 10 examples in the training set.Like IMDB and MNIST, the Reuters dataset comes packaged as part of Keras. Let's take a look right away:
###Code
from keras.datasets import reuters
(train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
Like with the IMDB dataset, the argument `num_words=10000` restricts the data to the 10,000 most frequently occurring words found in the data.We have 8,982 training examples and 2,246 test examples:
###Code
len(train_data)
len(test_data)
###Output
_____no_output_____
###Markdown
As with the IMDB reviews, each example is a list of integers (word indices):
###Code
train_data[10]
###Output
_____no_output_____
###Markdown
Here's how you can decode it back to words, in case you are curious:
###Code
word_index = reuters.get_word_index()
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# Note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_newswire = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_newswire
###Output
_____no_output_____
###Markdown
The label associated with an example is an integer between 0 and 45: a topic index.
###Code
train_labels[10]
###Output
_____no_output_____
###Markdown
Preparing the dataWe can vectorize the data with the exact same code as in our previous example:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1.
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
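###Markdown
To see what the multi-hot encoding does, here is a minimal illustration on a hypothetical two-word sequence:
###Code
print(vectorize_sequences([[0, 3]], dimension=5))  # -> [[1. 0. 0. 1. 0.]]
###Output
_____no_output_____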
###Markdown
To vectorize the labels, there are two possibilities: we could just cast the label list as an integer tensor, or we could use a "one-hot" encoding. One-hot encoding is a widely used format for categorical data, also called "categorical encoding". For a more detailed explanation of one-hot encoding, you can refer to Chapter 6, Section 1. In our case, one-hot encoding of our labels consists in embedding each label as an all-zero vector with a 1 in the place of the label index, e.g.:
###Code
def to_one_hot(labels, dimension=46):
results = np.zeros((len(labels), dimension))
for i, label in enumerate(labels):
results[i, label] = 1.
return results
# Our vectorized training labels
one_hot_train_labels = to_one_hot(train_labels)
# Our vectorized test labels
one_hot_test_labels = to_one_hot(test_labels)
###Output
_____no_output_____
###Markdown
Note that there is a built-in way to do this in Keras, which you have already seen in action in our MNIST example:
###Code
from keras.utils.np_utils import to_categorical
one_hot_train_labels = to_categorical(train_labels)
one_hot_test_labels = to_categorical(test_labels)
###Output
_____no_output_____
###Markdown
Building our networkThis topic classification problem looks very similar to our previous movie review classification problem: in both cases, we are trying to classify short snippets of text. There is however a new constraint here: the number of output classes has gone from 2 to 46, i.e. the dimensionality of the output space is much larger. In a stack of `Dense` layers like what we were using, each layer can only access information present in the output of the previous layer. If one layer drops some information relevant to the classification problem, this information can never be recovered by later layers: each layer can potentially become an "information bottleneck". In our previous example, we were using 16-dimensional intermediate layers, but a 16-dimensional space may be too limited to learn to separate 46 different classes: such small layers may act as information bottlenecks, permanently dropping relevant information.For this reason we will use larger layers. Let's go with 64 units:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
###Output
_____no_output_____
###Markdown
There are two other things you should note about this architecture:* We are ending the network with a `Dense` layer of size 46. This means that for each input sample, our network will output a 46-dimensional vector. Each entry in this vector (each dimension) will encode a different output class.* The last layer uses a `softmax` activation. You have already seen this pattern in the MNIST example. It means that the network will output a _probability distribution_ over the 46 different output classes, i.e. for every input sample, the network will produce a 46-dimensional output vector where `output[i]` is the probability that the sample belongs to class `i`. The 46 scores will sum to 1.The best loss function to use in this case is `categorical_crossentropy`. It measures the distance between two probability distributions: in our case, between the probability distribution output by our network, and the true distribution of the labels. By minimizing the distance between these two distributions, we train our network to output something as close as possible to the true labels.
###Code
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Validating our approachLet's set apart 1,000 samples in our training data to use as a validation set:
###Code
x_val = x_train[:1000]
partial_x_train = x_train[1000:]
y_val = one_hot_train_labels[:1000]
partial_y_train = one_hot_train_labels[1000:]
###Output
_____no_output_____
###Markdown
Now let's train our network for 20 epochs:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 7982 samples, validate on 1000 samples
Epoch 1/20
7982/7982 [==============================] - 1s - loss: 2.5241 - acc: 0.4952 - val_loss: 1.7263 - val_acc: 0.6100
Epoch 2/20
7982/7982 [==============================] - 0s - loss: 1.4500 - acc: 0.6854 - val_loss: 1.3478 - val_acc: 0.7070
Epoch 3/20
7982/7982 [==============================] - 0s - loss: 1.0979 - acc: 0.7643 - val_loss: 1.1736 - val_acc: 0.7460
Epoch 4/20
7982/7982 [==============================] - 0s - loss: 0.8723 - acc: 0.8178 - val_loss: 1.0880 - val_acc: 0.7490
Epoch 5/20
7982/7982 [==============================] - 0s - loss: 0.7045 - acc: 0.8477 - val_loss: 0.9822 - val_acc: 0.7760
Epoch 6/20
7982/7982 [==============================] - 0s - loss: 0.5660 - acc: 0.8792 - val_loss: 0.9379 - val_acc: 0.8030
Epoch 7/20
7982/7982 [==============================] - 0s - loss: 0.4569 - acc: 0.9037 - val_loss: 0.9039 - val_acc: 0.8050
Epoch 8/20
7982/7982 [==============================] - 0s - loss: 0.3668 - acc: 0.9238 - val_loss: 0.9279 - val_acc: 0.7890
Epoch 9/20
7982/7982 [==============================] - 0s - loss: 0.3000 - acc: 0.9326 - val_loss: 0.8835 - val_acc: 0.8070
Epoch 10/20
7982/7982 [==============================] - 0s - loss: 0.2505 - acc: 0.9434 - val_loss: 0.8967 - val_acc: 0.8150
Epoch 11/20
7982/7982 [==============================] - 0s - loss: 0.2155 - acc: 0.9473 - val_loss: 0.9080 - val_acc: 0.8110
Epoch 12/20
7982/7982 [==============================] - 0s - loss: 0.1853 - acc: 0.9506 - val_loss: 0.9025 - val_acc: 0.8140
Epoch 13/20
7982/7982 [==============================] - 0s - loss: 0.1680 - acc: 0.9524 - val_loss: 0.9268 - val_acc: 0.8100
Epoch 14/20
7982/7982 [==============================] - 0s - loss: 0.1512 - acc: 0.9562 - val_loss: 0.9500 - val_acc: 0.8130
Epoch 15/20
7982/7982 [==============================] - 0s - loss: 0.1371 - acc: 0.9559 - val_loss: 0.9621 - val_acc: 0.8090
Epoch 16/20
7982/7982 [==============================] - 0s - loss: 0.1306 - acc: 0.9553 - val_loss: 1.0152 - val_acc: 0.8050
Epoch 17/20
7982/7982 [==============================] - 0s - loss: 0.1210 - acc: 0.9575 - val_loss: 1.0262 - val_acc: 0.8010
Epoch 18/20
7982/7982 [==============================] - 0s - loss: 0.1185 - acc: 0.9570 - val_loss: 1.0354 - val_acc: 0.8040
Epoch 19/20
7982/7982 [==============================] - 0s - loss: 0.1128 - acc: 0.9598 - val_loss: 1.0841 - val_acc: 0.8010
Epoch 20/20
7982/7982 [==============================] - 0s - loss: 0.1097 - acc: 0.9594 - val_loss: 1.0707 - val_acc: 0.8040
###Markdown
Let's display its loss and accuracy curves:
###Code
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc = history.history['acc']
val_acc = history.history['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
It seems that the network starts overfitting after 8 epochs. Let's train a new network from scratch for 8 epochs, then let's evaluate it on the test set:
###Code
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(partial_x_train,
partial_y_train,
epochs=8,
batch_size=512,
validation_data=(x_val, y_val))
results = model.evaluate(x_test, one_hot_test_labels)
results
###Output
_____no_output_____
###Markdown
Our approach reaches an accuracy of ~78%. For a balanced binary classification problem, a purely random classifier would reach 50% accuracy; for this imbalanced 46-class problem, a random classifier only reaches about 19%, so our results look pretty good compared to that random baseline:
###Code
import copy
test_labels_copy = copy.copy(test_labels)
np.random.shuffle(test_labels_copy)
float(np.sum(np.array(test_labels) == np.array(test_labels_copy))) / len(test_labels)
###Output
_____no_output_____
###Markdown
Generating predictions on new data
We can verify that the `predict` method of our model instance returns a probability distribution over all 46 topics. Let's generate topic predictions for all of the test data:
###Code
predictions = model.predict(x_test)
###Output
_____no_output_____
###Markdown
Each entry in `predictions` is a vector of length 46:
###Code
predictions[0].shape
###Output
_____no_output_____
###Markdown
The coefficients in this vector sum to 1:
###Code
np.sum(predictions[0])
###Output
_____no_output_____
###Markdown
The largest entry is the predicted class, i.e. the class with the highest probability:
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
###Markdown
A different way to handle the labels and the loss
We mentioned earlier that another way to encode the labels would be to cast them as an integer tensor, like such:
###Code
y_train = np.array(train_labels)
y_test = np.array(test_labels)
###Output
_____no_output_____
###Markdown
The only thing it would change is the choice of the loss function. Our previous loss, `categorical_crossentropy`, expects the labels to follow a categorical encoding. With integer labels, we should use `sparse_categorical_crossentropy`:
###Code
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['acc'])
###Output
_____no_output_____
###Markdown
This new loss function is still mathematically the same as `categorical_crossentropy`; it just has a different interface.
On the importance of having sufficiently large intermediate layers
We mentioned earlier that since our final outputs are 46-dimensional, we should avoid intermediate layers with many fewer than 46 hidden units. Now let's see what happens when we introduce an information bottleneck by making an intermediate layer significantly smaller than 46-dimensional, e.g. 4-dimensional.
###Code
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(4, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=128,
validation_data=(x_val, y_val))
###Output
Train on 7982 samples, validate on 1000 samples
Epoch 1/20
7982/7982 [==============================] - 0s - loss: 3.1620 - acc: 0.2295 - val_loss: 2.6750 - val_acc: 0.2740
Epoch 2/20
7982/7982 [==============================] - 0s - loss: 2.2009 - acc: 0.3829 - val_loss: 1.7626 - val_acc: 0.5990
Epoch 3/20
7982/7982 [==============================] - 0s - loss: 1.4490 - acc: 0.6486 - val_loss: 1.4738 - val_acc: 0.6390
Epoch 4/20
7982/7982 [==============================] - 0s - loss: 1.2258 - acc: 0.6776 - val_loss: 1.3961 - val_acc: 0.6570
Epoch 5/20
7982/7982 [==============================] - 0s - loss: 1.0886 - acc: 0.7032 - val_loss: 1.3727 - val_acc: 0.6700
Epoch 6/20
7982/7982 [==============================] - 0s - loss: 0.9817 - acc: 0.7494 - val_loss: 1.3682 - val_acc: 0.6800
Epoch 7/20
7982/7982 [==============================] - 0s - loss: 0.8937 - acc: 0.7757 - val_loss: 1.3587 - val_acc: 0.6810
Epoch 8/20
7982/7982 [==============================] - 0s - loss: 0.8213 - acc: 0.7942 - val_loss: 1.3548 - val_acc: 0.6960
Epoch 9/20
7982/7982 [==============================] - 0s - loss: 0.7595 - acc: 0.8088 - val_loss: 1.3883 - val_acc: 0.7050
Epoch 10/20
7982/7982 [==============================] - 0s - loss: 0.7072 - acc: 0.8193 - val_loss: 1.4216 - val_acc: 0.7020
Epoch 11/20
7982/7982 [==============================] - 0s - loss: 0.6642 - acc: 0.8254 - val_loss: 1.4405 - val_acc: 0.7020
Epoch 12/20
7982/7982 [==============================] - 0s - loss: 0.6275 - acc: 0.8281 - val_loss: 1.4938 - val_acc: 0.7080
Epoch 13/20
7982/7982 [==============================] - 0s - loss: 0.5915 - acc: 0.8353 - val_loss: 1.5301 - val_acc: 0.7110
Epoch 14/20
7982/7982 [==============================] - 0s - loss: 0.5637 - acc: 0.8419 - val_loss: 1.5400 - val_acc: 0.7080
Epoch 15/20
7982/7982 [==============================] - 0s - loss: 0.5389 - acc: 0.8523 - val_loss: 1.5826 - val_acc: 0.7090
Epoch 16/20
7982/7982 [==============================] - 0s - loss: 0.5162 - acc: 0.8588 - val_loss: 1.6391 - val_acc: 0.7080
Epoch 17/20
7982/7982 [==============================] - 0s - loss: 0.4950 - acc: 0.8623 - val_loss: 1.6469 - val_acc: 0.7060
Epoch 18/20
7982/7982 [==============================] - 0s - loss: 0.4771 - acc: 0.8670 - val_loss: 1.7258 - val_acc: 0.6950
Epoch 19/20
7982/7982 [==============================] - 0s - loss: 0.4562 - acc: 0.8718 - val_loss: 1.7667 - val_acc: 0.6930
Epoch 20/20
7982/7982 [==============================] - 0s - loss: 0.4428 - acc: 0.8742 - val_loss: 1.7785 - val_acc: 0.7060
|
DJFnotebooks/DJFgpsro.ipynb | ###Markdown
Import Occultation Data
###Code
# Assumed imports (an earlier import cell is not shown in this notebook):
# numpy and pandas for the arrays/dataframes below, plus the local
# gpsro_tools helper module that is used further down.
import numpy as np
import pandas as pd
import gpsro_tools  # local helper module (see the warning paths in the output below)

# Import occultation data
december_year_info = np.load('/usb/monthly_diurnal_cycle_data_occultations/december_cosmic_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
january_year_info = np.load('/usb/monthly_diurnal_cycle_data_occultations/january_cosmic_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
february_year_info = np.load('/usb/monthly_diurnal_cycle_data_occultations/february_cosmic_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
december_year_info_metopa = np.load('/usb/monthly_diurnal_cycle_data_occultations/december_metop_A_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
january_year_info_metopa = np.load('/usb/monthly_diurnal_cycle_data_occultations/january_metop_A_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
february_year_info_metopa = np.load('/usb/monthly_diurnal_cycle_data_occultations/february_metop_A_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
december_year_info_metopb = np.load('/usb/monthly_diurnal_cycle_data_occultations/december_metop_B_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
january_year_info_metopb = np.load('/usb/monthly_diurnal_cycle_data_occultations/january_metop_B_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
february_year_info_metopb = np.load('/usb/monthly_diurnal_cycle_data_occultations/february_metop_B_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
december_year_info_grace = np.load('/usb/monthly_diurnal_cycle_data_occultations/december_grace_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
january_year_info_grace = np.load('/usb/monthly_diurnal_cycle_data_occultations/january_grace_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
february_year_info_grace = np.load('/usb/monthly_diurnal_cycle_data_occultations/february_grace_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
december_year_info_tsx = np.load('/usb/monthly_diurnal_cycle_data_occultations/december_tsx_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
january_year_info_tsx = np.load('/usb/monthly_diurnal_cycle_data_occultations/january_tsx_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
february_year_info_tsx = np.load('/usb/monthly_diurnal_cycle_data_occultations/february_tsx_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
december_year_info_kompsat5 = np.load('/usb/monthly_diurnal_cycle_data_occultations/december_kompsat5_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
january_year_info_kompsat5 = np.load('/usb/monthly_diurnal_cycle_data_occultations/january_kompsat5_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
february_year_info_kompsat5 = np.load('/usb/monthly_diurnal_cycle_data_occultations/february_kompsat5_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
december_year_info_paz = np.load('/usb/monthly_diurnal_cycle_data_occultations/december_paz_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
january_year_info_paz = np.load('/usb/monthly_diurnal_cycle_data_occultations/january_paz_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
february_year_info_paz = np.load('/usb/monthly_diurnal_cycle_data_occultations/february_paz_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
december_year_info_cosmic2 = np.load('/usb/monthly_diurnal_cycle_data_occultations/december_cosmic2_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
january_year_info_cosmic2 = np.load('/usb/monthly_diurnal_cycle_data_occultations/january_cosmic2_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
february_year_info_cosmic2 = np.load('/usb/monthly_diurnal_cycle_data_occultations/february_cosmic2_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
december_year_info_sacc = np.load('/usb/monthly_diurnal_cycle_data_occultations/december_sacc_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
january_year_info_sacc = np.load('/usb/monthly_diurnal_cycle_data_occultations/january_sacc_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
february_year_info_sacc = np.load('/usb/monthly_diurnal_cycle_data_occultations/february_sacc_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
december_year_info_tdx = np.load('/usb/monthly_diurnal_cycle_data_occultations/december_tdx_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
january_year_info_tdx = np.load('/usb/monthly_diurnal_cycle_data_occultations/january_tdx_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
february_year_info_tdx = np.load('/usb/monthly_diurnal_cycle_data_occultations/february_tdx_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
december_year_info_metopc = np.load('/usb/monthly_diurnal_cycle_data_occultations/december_metopc_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
january_year_info_metopc = np.load('/usb/monthly_diurnal_cycle_data_occultations/january_metopc_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
february_year_info_metopc = np.load('/usb/monthly_diurnal_cycle_data_occultations/february_metopc_diurnal_cycles_TLS_year_info.npy', allow_pickle=True)
jan_data = np.concatenate((january_year_info.T, january_year_info_metopa.T, january_year_info_metopb.T,
january_year_info_grace.T, january_year_info_tsx.T, january_year_info_kompsat5.T,
january_year_info_cosmic2.T, january_year_info_paz.T, january_year_info_sacc.T,
january_year_info_tdx.T, january_year_info_metopc.T))
feb_data = np.concatenate((february_year_info.T, february_year_info_metopa.T, february_year_info_metopb.T,
february_year_info_grace.T, february_year_info_tsx.T, february_year_info_kompsat5.T,
february_year_info_cosmic2.T, february_year_info_paz.T, february_year_info_sacc.T,
february_year_info_tdx.T, february_year_info_metopc.T))
dec_data = np.concatenate((december_year_info.T, december_year_info_metopa.T, december_year_info_metopb.T,
december_year_info_grace.T, december_year_info_tsx.T, december_year_info_kompsat5.T,
december_year_info_cosmic2.T, december_year_info_paz.T, december_year_info_sacc.T,
december_year_info_tdx.T, december_year_info_metopc.T))
#create dataframes for season
jan_year_info_df = pd.DataFrame(jan_data, columns=['Lat', 'Lon', 'Year', 'Day', 'Hour', 'Temp'])
feb_year_info_df = pd.DataFrame(feb_data, columns=['Lat', 'Lon', 'Year', 'Day', 'Hour', 'Temp'])
dec_year_info_df = pd.DataFrame(dec_data, columns=['Lat', 'Lon', 'Year', 'Day', 'Hour', 'Temp'])
###Output
_____no_output_____
###Markdown
Import ERA-5 Data
###Code
era_5_jan_07_5x10 = np.load('../../ERA_5_monthly_TLS_maps/january_2007_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_jan_08_5x10 = np.load('../../ERA_5_monthly_TLS_maps/january_2008_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_jan_09_5x10 = np.load('../../ERA_5_monthly_TLS_maps/january_2009_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_jan_10_5x10 = np.load('../../ERA_5_monthly_TLS_maps/january_2010_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_jan_11_5x10 = np.load('../../ERA_5_monthly_TLS_maps/january_2011_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_jan_12_5x10 = np.load('../../ERA_5_monthly_TLS_maps/january_2012_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_jan_13_5x10 = np.load('../../ERA_5_monthly_TLS_maps/january_2013_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_jan_14_5x10 = np.load('../../ERA_5_monthly_TLS_maps/january_2014_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_jan_15_5x10 = np.load('../../ERA_5_monthly_TLS_maps/january_2015_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_jan_16_5x10 = np.load('../../ERA_5_monthly_TLS_maps/january_2016_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_jan_17_5x10 = np.load('../../ERA_5_monthly_TLS_maps/january_2017_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_jan_18_5x10 = np.load('../../ERA_5_monthly_TLS_maps/january_2018_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_jan_19_5x10 = np.load('../../ERA_5_monthly_TLS_maps/january_2019_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_jan_20_5x10 = np.load('../../ERA_5_monthly_TLS_maps/january_2020_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_feb_07_5x10 = np.load('../../ERA_5_monthly_TLS_maps/february_2007_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_feb_08_5x10 = np.load('../../ERA_5_monthly_TLS_maps/february_2008_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_feb_09_5x10 = np.load('../../ERA_5_monthly_TLS_maps/february_2009_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_feb_10_5x10 = np.load('../../ERA_5_monthly_TLS_maps/february_2010_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_feb_11_5x10 = np.load('../../ERA_5_monthly_TLS_maps/february_2011_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_feb_12_5x10 = np.load('../../ERA_5_monthly_TLS_maps/february_2012_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_feb_13_5x10 = np.load('../../ERA_5_monthly_TLS_maps/february_2013_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_feb_14_5x10 = np.load('../../ERA_5_monthly_TLS_maps/february_2014_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_feb_15_5x10 = np.load('../../ERA_5_monthly_TLS_maps/february_2015_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_feb_16_5x10 = np.load('../../ERA_5_monthly_TLS_maps/february_2016_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_feb_17_5x10 = np.load('../../ERA_5_monthly_TLS_maps/february_2017_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_feb_18_5x10 = np.load('../../ERA_5_monthly_TLS_maps/february_2018_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_feb_19_5x10 = np.load('../../ERA_5_monthly_TLS_maps/february_2019_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_feb_20_5x10 = np.load('../../ERA_5_monthly_TLS_maps/february_2020_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_dec_06_5x10 = np.load('../../ERA_5_monthly_TLS_maps/december_2006_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_dec_07_5x10 = np.load('../../ERA_5_monthly_TLS_maps/december_2007_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_dec_08_5x10 = np.load('../../ERA_5_monthly_TLS_maps/december_2008_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_dec_09_5x10 = np.load('../../ERA_5_monthly_TLS_maps/december_2009_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_dec_10_5x10 = np.load('../../ERA_5_monthly_TLS_maps/december_2010_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_dec_11_5x10 = np.load('../../ERA_5_monthly_TLS_maps/december_2011_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_dec_12_5x10 = np.load('../../ERA_5_monthly_TLS_maps/december_2012_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_dec_13_5x10 = np.load('../../ERA_5_monthly_TLS_maps/december_2013_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_dec_14_5x10 = np.load('../../ERA_5_monthly_TLS_maps/december_2014_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_dec_15_5x10 = np.load('../../ERA_5_monthly_TLS_maps/december_2015_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_dec_16_5x10 = np.load('../../ERA_5_monthly_TLS_maps/december_2016_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_dec_17_5x10 = np.load('../../ERA_5_monthly_TLS_maps/december_2017_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_dec_18_5x10 = np.load('../../ERA_5_monthly_TLS_maps/december_2018_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_dec_19_5x10 = np.load('../../ERA_5_monthly_TLS_maps/december_2019_ERA_5_daily_zonal_mean_TLS_map_5_10.npy', allow_pickle=True)
era_5_jan_07_5x10_df = gpsro_tools.era5_df_switcher(era_5_jan_07_5x10)
era_5_jan_08_5x10_df = gpsro_tools.era5_df_switcher(era_5_jan_08_5x10)
era_5_jan_09_5x10_df = gpsro_tools.era5_df_switcher(era_5_jan_09_5x10)
era_5_jan_10_5x10_df = gpsro_tools.era5_df_switcher(era_5_jan_10_5x10)
era_5_jan_11_5x10_df = gpsro_tools.era5_df_switcher(era_5_jan_11_5x10)
era_5_jan_12_5x10_df = gpsro_tools.era5_df_switcher(era_5_jan_12_5x10)
era_5_jan_13_5x10_df = gpsro_tools.era5_df_switcher(era_5_jan_13_5x10)
era_5_jan_14_5x10_df = gpsro_tools.era5_df_switcher(era_5_jan_14_5x10)
era_5_jan_15_5x10_df = gpsro_tools.era5_df_switcher(era_5_jan_15_5x10)
era_5_jan_16_5x10_df = gpsro_tools.era5_df_switcher(era_5_jan_16_5x10)
era_5_jan_17_5x10_df = gpsro_tools.era5_df_switcher(era_5_jan_17_5x10)
era_5_jan_18_5x10_df = gpsro_tools.era5_df_switcher(era_5_jan_18_5x10)
era_5_jan_19_5x10_df = gpsro_tools.era5_df_switcher(era_5_jan_19_5x10)
era_5_jan_20_5x10_df = gpsro_tools.era5_df_switcher(era_5_jan_20_5x10)
era_5_feb_07_5x10_df = gpsro_tools.era5_df_switcher(era_5_feb_07_5x10)
era_5_feb_08_5x10_df = gpsro_tools.era5_df_switcher(era_5_feb_08_5x10)
era_5_feb_09_5x10_df = gpsro_tools.era5_df_switcher(era_5_feb_09_5x10)
era_5_feb_10_5x10_df = gpsro_tools.era5_df_switcher(era_5_feb_10_5x10)
era_5_feb_11_5x10_df = gpsro_tools.era5_df_switcher(era_5_feb_11_5x10)
era_5_feb_12_5x10_df = gpsro_tools.era5_df_switcher(era_5_feb_12_5x10)
era_5_feb_13_5x10_df = gpsro_tools.era5_df_switcher(era_5_feb_13_5x10)
era_5_feb_14_5x10_df = gpsro_tools.era5_df_switcher(era_5_feb_14_5x10)
era_5_feb_15_5x10_df = gpsro_tools.era5_df_switcher(era_5_feb_15_5x10)
era_5_feb_16_5x10_df = gpsro_tools.era5_df_switcher(era_5_feb_16_5x10)
era_5_feb_17_5x10_df = gpsro_tools.era5_df_switcher(era_5_feb_17_5x10)
era_5_feb_18_5x10_df = gpsro_tools.era5_df_switcher(era_5_feb_18_5x10)
era_5_feb_19_5x10_df = gpsro_tools.era5_df_switcher(era_5_feb_19_5x10)
era_5_feb_20_5x10_df = gpsro_tools.era5_df_switcher(era_5_feb_20_5x10)
era_5_dec_06_5x10_df = gpsro_tools.era5_df_switcher(era_5_dec_06_5x10)
era_5_dec_07_5x10_df = gpsro_tools.era5_df_switcher(era_5_dec_07_5x10)
era_5_dec_08_5x10_df = gpsro_tools.era5_df_switcher(era_5_dec_08_5x10)
era_5_dec_09_5x10_df = gpsro_tools.era5_df_switcher(era_5_dec_09_5x10)
era_5_dec_10_5x10_df = gpsro_tools.era5_df_switcher(era_5_dec_10_5x10)
era_5_dec_11_5x10_df = gpsro_tools.era5_df_switcher(era_5_dec_11_5x10)
era_5_dec_12_5x10_df = gpsro_tools.era5_df_switcher(era_5_dec_12_5x10)
era_5_dec_13_5x10_df = gpsro_tools.era5_df_switcher(era_5_dec_13_5x10)
era_5_dec_14_5x10_df = gpsro_tools.era5_df_switcher(era_5_dec_14_5x10)
era_5_dec_15_5x10_df = gpsro_tools.era5_df_switcher(era_5_dec_15_5x10)
era_5_dec_16_5x10_df = gpsro_tools.era5_df_switcher(era_5_dec_16_5x10)
era_5_dec_17_5x10_df = gpsro_tools.era5_df_switcher(era_5_dec_17_5x10)
era_5_dec_18_5x10_df = gpsro_tools.era5_df_switcher(era_5_dec_18_5x10)
era_5_dec_19_5x10_df = gpsro_tools.era5_df_switcher(era_5_dec_19_5x10)
era_5_jan_df = pd.concat([era_5_jan_07_5x10_df, era_5_jan_08_5x10_df, era_5_jan_09_5x10_df, era_5_jan_10_5x10_df,
era_5_jan_11_5x10_df, era_5_jan_12_5x10_df, era_5_jan_13_5x10_df, era_5_jan_14_5x10_df,
era_5_jan_15_5x10_df, era_5_jan_16_5x10_df, era_5_jan_17_5x10_df, era_5_jan_18_5x10_df,
era_5_jan_19_5x10_df, era_5_jan_20_5x10_df])
era_5_feb_df = pd.concat([era_5_feb_07_5x10_df, era_5_feb_08_5x10_df, era_5_feb_09_5x10_df, era_5_feb_10_5x10_df,
era_5_feb_11_5x10_df, era_5_feb_12_5x10_df, era_5_feb_13_5x10_df, era_5_feb_14_5x10_df,
era_5_feb_15_5x10_df, era_5_feb_16_5x10_df, era_5_feb_17_5x10_df, era_5_feb_18_5x10_df,
era_5_feb_19_5x10_df, era_5_feb_20_5x10_df])
era_5_dec_df = pd.concat([era_5_dec_06_5x10_df, era_5_dec_07_5x10_df, era_5_dec_08_5x10_df, era_5_dec_09_5x10_df,
era_5_dec_10_5x10_df, era_5_dec_11_5x10_df, era_5_dec_12_5x10_df, era_5_dec_13_5x10_df,
era_5_dec_14_5x10_df, era_5_dec_15_5x10_df, era_5_dec_16_5x10_df, era_5_dec_17_5x10_df,
era_5_dec_18_5x10_df, era_5_dec_19_5x10_df])
###Output
/home/disk/p/aodhan/cosmic/diurnal_cycle_corrections/sampling_biases/TLS_diurnal_cycles/gpsro_tools.py:152: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
era_5_df_pos_lons['Lon'] = era_5_df_pos_lons['Lon'] - 180.
/home/disk/p/aodhan/cosmic/diurnal_cycle_corrections/sampling_biases/TLS_diurnal_cycles/gpsro_tools.py:153: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
era_5_df_neg_lons['Lon'] = era_5_df_neg_lons['Lon'] + 180
###Markdown
Begin processing
###Code
daily_era5_removed_and_bias_removed_jan = gpsro_tools.background_and_bias_remover(jan_year_info_df, era_5_jan_df)
daily_era5_removed_and_bias_removed_feb = gpsro_tools.background_and_bias_remover(feb_year_info_df, era_5_feb_df)
daily_era5_removed_and_bias_removed_dec = gpsro_tools.background_and_bias_remover(dec_year_info_df, era_5_dec_df)
###Output
2006
2007
###Markdown
Regroup data and drop NaN values
###Code
data_all_mean_stuff_removed = pd.concat([daily_era5_removed_and_bias_removed_jan,
daily_era5_removed_and_bias_removed_feb,
daily_era5_removed_and_bias_removed_dec])
data_all_mean_stuff_removed.dropna(subset = ["Temp"], inplace=True)
###Output
_____no_output_____
###Markdown
Now create the diurnal cycles
###Code
cleaned_diurnal_cycle_data = gpsro_tools.box_mean_remover(data_all_mean_stuff_removed)
diurnal_cycles_by_lat, diurnal_cycles_in_boxes = gpsro_tools.diurnal_binner(cleaned_diurnal_cycle_data)
np.save('DJF_GPSRO_5_10_boxes_diurnal_cycles_test', diurnal_cycles_in_boxes)
###Output
_____no_output_____ |
week1/Intro_To_DL_W1.ipynb | ###Markdown
###Code
#Library Inclusion/Import Section
%tensorflow_version 2.x
import tensorflow as tf;
import os
def restart_runtime():
    os.kill(os.getpid(), 9)

# restart_runtime()  # uncomment to restart the session after changing the version of a preloaded library

def main():
    pass  # placeholder body so the cell runs; the course fills this in later

if __name__ == "__main__":
    main()
###Output
_____no_output_____ |
stats/correlation.ipynb | ###Markdown
Correlation
Pearson's product-moment coefficient: https://en.wikipedia.org/wiki/Correlation_and_dependence#Pearson's_product-moment_coefficient
At first I got confused and thought that the XY notation referred to joint probability distributions, but those are different topics.
Formula 1 and Formula 2 (reconstructed below).
Example with 2 discrete variables
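The two standard forms of Pearson's coefficient, which the code in this example implements, are reconstructed here (the original formula images did not survive; `E` and `S` in the code stand for the mean and the standard deviation):

$$\rho_{X,Y} = \frac{E\big[(X - E[X])\,(Y - E[Y])\big]}{\sigma_X \, \sigma_Y} \quad \text{(Formula 1)}$$

$$\rho_{X,Y} = \frac{E[XY] - E[X]\,E[Y]}{\sqrt{E[X^2] - E[X]^2}\,\sqrt{E[Y^2] - E[Y]^2}} \quad \text{(Formula 2)}$$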
###Code
E = np.mean
S = np.std
X = np.array([1, 1, 0])
Y = np.array([-1, 0, 1])
## Formula 1
XY = []
for i in X - E(X):
for j in Y - E(Y):
XY.append(i*j)
numerator = E(XY)
denominator = S(X)*S(Y)
MSG = """
There is no relationship between the two variables: {}
"""
print(MSG.format(numerator/denominator))
## Formula 2
XY = []
for i in X:
for j in Y:
XY.append(i*j)
numerator = E(XY) - E(X)*E(Y)
denominator = (E(X**2) - E(X)**2)**0.5 * (E(Y**2) - E(Y)**2)**0.5
MSG = """
There is no relationship between the two variables: {}
"""
print(MSG.format(numerator/denominator))
###Output
There is no relationship between the two variables: 0.0
###Markdown
Correlation function
https://en.wikipedia.org/wiki/Correlation_function
The correlation function compares two series by operating over the series, which produces a function. Cross-correlation is when you compare two different signals; autocorrelation is when you compare a series with a shifted copy of itself.
Correlation indicator
- In the continuous case the expected value is used.
- In the discrete case the sum is used.
Continuous example - https://www.youtube.com/watch?v=DblXnXxUQc0 He uses a continuous function as the example, choosing a sinusoid that depends on two random variables, the frequency and the amplitude, which produces a noisy sinusoid. The analytic derivation requires the variables to be independent. (The original cell showed two "Computation of the mean" figures here.)
Discrete example
- https://www.youtube.com/watch?v=_r_fDlM0Dx0 - The expected value of the function can be computed.
- https://www.youtube.com/watch?v=ngEC3sXeUb4 - Shows the formula for the normalized correlation.
The series used in the video are very simple; the idea is to use a discrete representation of the sinusoid from the continuous example.
Analytic vs numerical derivation
The series used in the video are very simple; the idea is to use a discrete representation of the sinusoid from the continuous example.
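For reference, the analytic autocorrelation that the code below compares against is the textbook result for the random sinusoid $X(t) = A\sin(\omega_c t + \Theta)$ with $A$ and $\Theta$ independent and $\Theta$ uniform on $[-\pi, \pi]$ (this is what the line `corr = 0.5*np.mean(A**2)*np.cos(wc*T)` assumes):

$$R_X(\tau) = E\big[X(t)\,X(t+\tau)\big] = \frac{E[A^2]}{2}\,\cos(\omega_c \tau)$$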
###Code
N = 1000
A = np.random.normal(0, 1, N)
Th = np.random.uniform(-np.pi, np.pi, N)
wc = 1
X = lambda t: A*np.sin(wc*t + Th)
t = np.arange(0, 6*np.pi, 6*np.pi/N)
t_ticks = np.arange(0, 6*np.pi, 6*np.pi/10)
fig = plt.figure(figsize=(15,8))
ax = fig.add_subplot(1, 1, 1)
ax.plot(t, X(t))
ax.set_xticks(t_ticks)
plt.show()
#### The autocorrelation at lag 0 should be 1
T = 0
MSG = """
This is the value computed with the analytic solution: {}
"""
corr = 0.5*np.mean(A**2)*np.cos(wc*T)
print(MSG.format(corr))
MSG = """
This is the value computed with the numerical solution: {}
"""
nominator = np.mean( X(t)*X(t+T) )
print(MSG.format(nominator))
"""
The expected values from the analytic and the numerical derivations are close to each other.
"""
###Output
This is the value computed with the analytic solution: 0.45936869278035647
This is the value computed with the numerical solution: 0.4497684459967031
###Markdown
Changing the aggregation function
In the previous example I used the mean to compare the continuous case with the discrete one. Instead of using the expected value, I would like to try the sum of the function, so that the same correlation coefficient from the discrete case can be applied to continuous variables.
The analytic function becomes
$\int A^2 \cdot \left( \dfrac{t}{2}\cos(T w_c) - \dfrac{\sin(2\theta + w_c(2t + T))}{4 w_c}\right)$
Since $T$ is 0:
$\int A^2 \cdot \left( \dfrac{t}{2} - \dfrac{\sin(2\theta + w_c(2t))}{4 w_c}\right)$
Since $w_c$ is 1:
$\int A^2 \cdot \left( \dfrac{t}{2} - \dfrac{\sin(2\theta + 2t)}{4}\right) = \int A^2 \cdot \left( \dfrac{t}{2} - \dfrac{\sin(2(t + \theta))}{4}\right)$
###Code
exp_1 = np.sum(A**2)
exp_2 = t[-1]/2
## I used the mean because I don't know how to reduce the theta term after integrating
exp_3 = np.mean(np.sin(2*(t[-1] + Th))/4)
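# Optional symbolic check of the antiderivative quoted in the markdown above.
# This is only a sketch: it assumes sympy is available, which the rest of the
# notebook does not require, so it is skipped gracefully when it is missing.
try:
    import sympy as sp
    t_, T_, w_, th_, A_ = sp.symbols('t T w theta A', positive=True)
    antiderivative = sp.integrate(A_**2 * sp.sin(w_*t_ + th_) * sp.sin(w_*(t_ + T_) + th_), t_)
    print(sp.simplify(antiderivative))
except ImportError:
    pass  # sympy not installed; the numerical comparison below still runs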
MSG = """
The results are far apart; I must be doing something wrong with the random variable {}
"""
print(MSG.format((exp_1 * (exp_2 - exp_3))))
MSG = """
This is the discrete computation of the function with the random noise {}; the result is about 10 times smaller than the one obtained analytically.
"""
nominator = np.sum( X(t)*X(t+T) )
print(MSG.format(nominator))
###Output
The results are far apart; I must be doing something wrong with the random variable 8649.074678436786
This is the discrete computation of the function with the random noise 449.7684459967031; the result is about 10 times smaller than the one obtained analytically.
###Markdown
Verify using a simpler function
I'll start by using a straight line with slope 1.
###Code
N = 1000
A = np.random.normal(0, 1, N)
wc = 1
X = lambda t: [A[i]*wc*t[i] for i in range(t.shape[0])]
t = np.arange(0, 6*np.pi, 6*np.pi/N)
t_ticks = np.arange(0, 6*np.pi, 6*np.pi/10)
fig = plt.figure(figsize=(15,8))
ax = fig.add_subplot(1, 1, 1)
ax.plot(t, X(t))
ax.set_xticks(t_ticks)
plt.show()
#### Autocorrelation: plot
#### The functions only look alike when tau is 0
T = 1
## This computes the discrete integral of the correlation function
discrete_corr = np.array([ X(t)[i] * X(t)[i+T] for i in range(t.shape[0] - T) ])
## When modelling it I have to treat the random variable as if it were two separate ones
continuous_corr = np.array([ A[i+T]* A[i] * wc**2 * t[i]*t[i + T] for i in range(t.shape[0] - T) ])
fig = plt.figure(figsize=(15,8))
ax = fig.add_subplot(1, 2, 1)
ax.plot(t[T:], discrete_corr)
ax.set_xticks(t_ticks)
ax = fig.add_subplot(1, 2, 2)
ax.plot(t[T:], continuous_corr)
ax.set_xticks(t_ticks)
plt.show()
#### Autocorrelation: plot
#### The functions only look alike when tau is 0
T = 0
## This computes the discrete integral of the correlation function
discrete_corr = np.cumsum(np.array([ X(t)[i] * X(t)[i+T] for i in range(t.shape[0] - T) ]) * np.mean(t[1:] - t[:-1]))
## I have not been able to add the noise to the continuous version
continuous_corr = np.array([ A[i]*A[i + T] * wc**2 * (t[i]**3/3 + (t[i]**2 * T)/2) for i in range(t.shape[0] - T) ])
# discrete_corr = np.array([ X(t)[i] * X(t+T)[i] for i in range(t.shape[0]) ])
# continuous_corr = np.array([ A[i]**2 * ( (t[i]**3)/3 + (t[i]**2 * T)/2 ) for i in range(t.shape[0]) ])
fig = plt.figure(figsize=(15,8))
ax = fig.add_subplot(1, 2, 1)
ax.plot(t[T:], discrete_corr)
ax.set_xticks(t_ticks)
ax = fig.add_subplot(1, 2, 2)
ax.plot(t[T:], continuous_corr)
ax.set_xticks(t_ticks)
plt.show()
#### Autocorrelation as the sum of the function
MSG = """
This is the value computed with the analytic solution: {}
"""
nominator = continuous_corr[-1] - continuous_corr[0]
print(MSG.format(corr))
MSG = """
This is the value computed with the numerical solution: {}
"""
nominator = discrete_corr[-1] - discrete_corr[0]
print(MSG.format(nominator))
###Output
This is the value computed with the analytic solution: 2098.159417801148
This is the value computed with the numerical solution: 2429.9958858799196
|
Handson_intro to python/PY0101EN-2-1-Tuples.ipynb | ###Markdown
Tuples in Python Welcome! This notebook will teach you about tuples in the Python Programming Language. By the end of this lab, you'll know the basic tuple operations in Python, including indexing, slicing and sorting. Tuples In Python, there are different data types: string, integer and float. These data types can all be contained in a tuple as follows: Now, let us create your first tuple with string, integer and float.
###Code
# Create your first tuple
tuple1 = ("disco",10,1.2 )
tuple1
###Output
_____no_output_____
###Markdown
The type of variable is a **tuple**.
###Code
# Print the type of the tuple you created
type(tuple1)
###Output
_____no_output_____
###Markdown
Indexing Each element of a tuple can be accessed via an index. The following table represents the relationship between the index and the items in the tuple. Each element can be obtained by the name of the tuple followed by a square bracket with the index number: We can print out each value in the tuple:
###Code
# Print the variable on each index
print(tuple1[0])
print(tuple1[1])
print(tuple1[2])
###Output
_____no_output_____
###Markdown
We can print out the **type** of each value in the tuple:
###Code
# Print the type of value on each index
print(type(tuple1[0]))
print(type(tuple1[1]))
print(type(tuple1[2]))
###Output
_____no_output_____
###Markdown
We can also use negative indexing. We use the same table above with corresponding negative values: We can obtain the last element as follows (this time we will not use the print statement to display the values):
###Code
# Use negative index to get the value of the last element
tuple1[-1]
###Output
_____no_output_____
###Markdown
We can display the next two elements as follows:
###Code
# Use negative index to get the value of the second last element
tuple1[-2]
# Use negative index to get the value of the third last element
tuple1[-3]
###Output
_____no_output_____
###Markdown
Concatenate Tuples We can concatenate or combine tuples by using the **+** sign:
###Code
# Concatenate two tuples
tuple2 = tuple1 + ("hard rock", 10)
tuple2
###Output
_____no_output_____
###Markdown
We can slice tuples obtaining multiple values as demonstrated by the figure below: Slicing We can slice tuples, obtaining new tuples with the corresponding elements:
###Code
# Slice from index 0 to index 2
tuple2[0:3]
###Output
_____no_output_____
###Markdown
We can obtain the last two elements of the tuple:
###Code
# Slice from index 3 to index 4
tuple2[3:5]
###Output
_____no_output_____
###Markdown
We can obtain the length of a tuple using the length command:
###Code
# Get the length of tuple
len(tuple2)
###Output
_____no_output_____
###Markdown
This figure shows the number of elements: Sorting Consider the following tuple:
###Code
# A sample tuple
Ratings = (0, 9, 6, 5, 10, 8, 9, 6, 2)
###Output
_____no_output_____
###Markdown
We can sort the values in a tuple and save the result (note that `sorted` returns a new list, not a tuple):
###Code
# Sort the tuple
RatingsSorted = sorted(Ratings)
RatingsSorted
###Output
_____no_output_____
###Markdown
Nested Tuple A tuple can contain another tuple as well as other more complex data types. This process is called 'nesting'. Consider the following tuple with several elements:
###Code
# Create a nest tuple
NestedT =(1, 2, ("pop", "rock") ,(3,4),("disco",(1,2)))
###Output
_____no_output_____
###Markdown
Each element in the tuple including other tuples can be obtained via an index as shown in the figure:
###Code
# Print element on each index
print("Element 0 of Tuple: ", NestedT[0])
print("Element 1 of Tuple: ", NestedT[1])
print("Element 2 of Tuple: ", NestedT[2])
print("Element 3 of Tuple: ", NestedT[3])
print("Element 4 of Tuple: ", NestedT[4])
###Output
_____no_output_____
###Markdown
We can use the second index to access other tuples as demonstrated in the figure: We can access the nested tuples :
###Code
# Print element on each index, including nest indexes
print("Element 2, 0 of Tuple: ", NestedT[2][0])
print("Element 2, 1 of Tuple: ", NestedT[2][1])
print("Element 3, 0 of Tuple: ", NestedT[3][0])
print("Element 3, 1 of Tuple: ", NestedT[3][1])
print("Element 4, 0 of Tuple: ", NestedT[4][0])
print("Element 4, 1 of Tuple: ", NestedT[4][1])
###Output
_____no_output_____
###Markdown
We can access strings in the second nested tuples using a third index:
###Code
# Print the first element in the second nested tuples
NestedT[2][1][0]
# Print the second element in the second nested tuples
NestedT[2][1][1]
###Output
_____no_output_____
###Markdown
We can use a tree to visualise the process. Each new index corresponds to a deeper level in the tree: Similarly, we can access elements nested deeper in the tree with a fourth index:
###Code
# Print the first element in the second nested tuples
NestedT[4][1][0]
# Print the second element in the second nested tuples
NestedT[4][1][1]
###Output
_____no_output_____
###Markdown
The following figure shows the relationship of the tree and the element NestedT[4][1][1]: Quiz on Tuples Consider the following tuple:
###Code
# sample tuple
genres_tuple = ("pop", "rock", "soul", "hard rock", "soft rock", \
"R&B", "progressive rock", "disco")
genres_tuple
###Output
_____no_output_____
###Markdown
Find the length of the tuple, genres_tuple:
###Code
# Write your code below and press Shift+Enter to execute
###Output
_____no_output_____
###Markdown
Access the element, with respect to index 3:
###Code
# Write your code below and press Shift+Enter to execute
###Output
_____no_output_____
###Markdown
Use slicing to obtain indexes 3, 4 and 5:
###Code
# Write your code below and press Shift+Enter to execute
###Output
_____no_output_____
###Markdown
Find the first two elements of the tuple genres_tuple:
###Code
# Write your code below and press Shift+Enter to execute
###Output
_____no_output_____
###Markdown
Find the first index of "disco":
###Code
# Write your code below and press Shift+Enter to execute
###Output
_____no_output_____
###Markdown
Generate a sorted List from the Tuple C_tuple=(-5, 1, -3):
###Code
# Write your code below and press Shift+Enter to execute
###Output
_____no_output_____ |
week07_compression/train_and_export.ipynb | ###Markdown
__This notebook__ trains a small LSTM language model and showcases its predictions in JavaScript.
###Code
%env CUDA_VISIBLE_DEVICES=0,1,2,3
!pip show numpy tensorflow subword_nmt nltk prefetch_generator tensorflowjs | grep -A1 Name
# note: we *need* tf2.2+, the code doesn't work on tf1.x
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers as L
import nltk
import pandas as pd
import subword_nmt.learn_bpe, subword_nmt.apply_bpe
from tqdm import tqdm
from IPython.display import clear_output
from prefetch_generator import background # pip install prefetch_generator
import tensorflowjs as tfjs  # used below by tfjs.converters.save_keras_model
import matplotlib.pyplot as plt
%matplotlib inline
###Output
env: CUDA_VISIBLE_DEVICES=0,1,2,3
Name: numpy
Version: 1.18.2
--
Name: tensorflow
Version: 2.2.0
--
Name: subword-nmt
Version: 0.3.7
--
Name: nltk
Version: 3.5
--
Name: prefetch-generator
Version: 1.0.1
--
Name: tensorflowjs
Version: 2.1.0
###Markdown
Read the data
We're gonna train a model on arxiv papers based on [this dataset](https://www.kaggle.com/neelshah18/arxivdataset). We'll use the version of this dataset from [Yandex NLP course](https://github.com/yandexdataschool/nlp_course)
###Code
# Alternative manual download link: https://yadi.sk/d/_nGyU2IajjR9-w
!wget "https://www.dropbox.com/s/99az9n1b57qkd9j/arxivData.json.tar.gz?dl=1" -O arxivData.json.tar.gz
!tar -xvzf arxivData.json.tar.gz
data = pd.read_json("./arxivData.json")
lines = data.apply(lambda row: row['title'] + ' ; ' + row['summary'], axis=1).tolist()
tokenizer = nltk.tokenize.WordPunctTokenizer()
lines = [' '.join(line).lower() for line in tokenizer.tokenize_sents(lines)]
with open('lines.tok', 'w') as f:
for line in lines:
f.write(line + '\n')
with open('lines.tok', 'r') as f_lines_tok, open('bpe_rules', 'w') as f_bpe:
subword_nmt.learn_bpe.learn_bpe(f_lines_tok, f_bpe, num_symbols=4000)
with open('bpe_rules', 'r') as f_bpe:
bpeizer = subword_nmt.apply_bpe.BPE(f_bpe)
lines = list(map(' '.join, map(bpeizer.segment_tokens, map(str.split, lines))))
print(lines[0])
num_tokens_per_line = list(map(len, map(str.split, lines)))
max_len = int(np.percentile(num_tokens_per_line, 90))
plt.hist(num_tokens_per_line, bins=20);
print("90-th percentile:", max_len)
###Output
90-th percentile: 333
###Markdown
Vocabulary
Let's define a special class that converts between text lines and tf tensors
###Code
import nltk
import json
from collections import Counter
class Vocab:
def __init__(self, tokens, bos="_BOS_", eos="_EOS_", unk='_UNK_'):
"""
A special class that converts lines of tokens into matrices and backwards
source: https://github.com/yandexdataschool/nlp_course/blob/2019/week04_seq2seq/utils.py
"""
assert all(tok in tokens for tok in (bos, eos, unk))
self.tokens = tokens
self.token_to_ix = {t:i for i, t in enumerate(tokens)}
self.bos, self.eos, self.unk = bos, eos, unk
self.bos_ix = self.token_to_ix[bos]
self.eos_ix = self.token_to_ix[eos]
self.unk_ix = self.token_to_ix[unk]
def __len__(self):
return len(self.tokens)
@classmethod
def from_data(cls, lines, max_tokens=None, bos="_BOS_", eos="_EOS_", unk='_UNK_'):
flat_lines = '\n'.join(list(lines)).split()
tokens, counts = zip(*Counter(flat_lines).most_common(max_tokens))
tokens = [bos, eos, unk] + [t for t in sorted(tokens) if t not in (bos, eos, unk)]
return cls(tokens, bos, eos, unk)
def save(self, path):
with open(path, 'w') as f:
json.dump((self.tokens, self.bos, self.eos, self.unk), f)
@classmethod
def load(cls, path):
with open(path, 'r') as f:
return cls(*json.load(f))
def tokenize(self, string):
"""converts string to a list of tokens"""
tokens = [tok if tok in self.token_to_ix else self.unk for tok in string.split()]
return [self.bos] + tokens + [self.eos]
def to_matrix(self, lines, max_len=None):
"""
convert variable length token sequences into fixed size matrix
example usage:
>>>print( as_matrix(words[:3],source_to_ix))
[[15 22 21 28 27 13 -1 -1 -1 -1 -1]
[30 21 15 15 21 14 28 27 13 -1 -1]
[25 37 31 34 21 20 37 21 28 19 13]]
"""
lines = list(map(self.tokenize, lines))
max_len = max_len or max(map(len, lines))
matrix = np.full((len(lines), max_len), self.eos_ix, dtype='int32')
for i, seq in enumerate(lines):
row_ix = list(map(self.token_to_ix.get, seq))[:max_len]
matrix[i, :len(row_ix)] = row_ix
return tf.convert_to_tensor(matrix)
def to_lines(self, matrix, crop=True):
"""
Convert matrix of token ids into strings
:param matrix: matrix of tokens of int32, shape=[batch,time]
:param crop: if True, crops BOS and EOS from line
:return:
"""
lines = []
for line_ix in map(list,matrix):
if crop:
if line_ix[0] == self.bos_ix:
line_ix = line_ix[1:]
if self.eos_ix in line_ix:
line_ix = line_ix[:line_ix.index(self.eos_ix)]
line = ' '.join(self.tokens[i] for i in line_ix)
lines.append(line)
return lines
def infer_length(self, batch_ix: tf.Tensor, dtype=tf.int32):
""" compute length given output indices, return int32 vector [len(batch_ix)] """
is_eos = tf.cast(tf.equal(batch_ix, self.eos_ix), dtype)
count_eos = tf.cumsum(is_eos, axis=1, exclusive=True)
lengths = tf.reduce_sum(tf.cast(tf.equal(count_eos, 0), dtype), axis=1)
return lengths
def infer_mask(self, batch_ix: tf.Tensor, dtype=tf.bool):
""" all tokens after (but not including) first EOS are masked out """
lengths = self.infer_length(batch_ix)
return tf.sequence_mask(lengths, maxlen=tf.shape(batch_ix)[1], dtype=dtype)
voc = Vocab.from_data(lines)
voc.to_matrix(lines[:2])
voc.to_lines(voc.to_matrix(lines[:3])[:, :15])
###Output
_____no_output_____
###Markdown
Model & training
Now let us train a simple LSTM language model on the pre-processed data.
__Note:__ we don't use validation for simplicity's sake, meaning our model probably overfits like crazy. But who cares? It's a demo!
###Code
class LanguageModel(keras.models.Model):
def __init__(self, voc, emb_size=128, hid_size=1024):
super().__init__()
self.voc = voc
self.emb = L.Embedding(len(voc), emb_size)
self.lstm = L.LSTM(hid_size, return_sequences=True, return_state=True)
self.logits = L.Dense(len(voc))
def call(self, batch_ix):
hid_seq, last_hid, last_cell = self.lstm(self.emb(batch_ix[:, :-1]))
logits = self.logits(hid_seq)
mask = self.voc.infer_mask(batch_ix, dtype=tf.float32)
loss_values = tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=tf.reshape(logits, [-1, logits.shape[-1]]),
labels=tf.reshape(batch_ix[:, 1:], [-1])
)
mean_loss = tf.reduce_sum(loss_values * tf.reshape(mask[:, 1:], tf.shape(loss_values))) \
/ tf.reduce_sum(mask[:, 1:])
return mean_loss
def iterate_minibatches(lines, batch_size, cycle=True, **kwargs):
while True:
lines_shuf = [lines[i] for i in np.random.permutation(len(lines))]
for batch_start in range(0, len(lines_shuf), batch_size):
yield (voc.to_matrix(lines_shuf[batch_start: batch_start + batch_size], **kwargs),) * 2
if not cycle:
break
with tf.distribute.MirroredStrategy().scope() as scope:
print('Number of devices: {}'.format(scope.num_replicas_in_sync))
model = LanguageModel(voc)
model.compile(optimizer='adam', loss=lambda _, loss: tf.reduce_mean(loss))
import glob
checkpoint_path = './checkpoints/lstm1024_emb128_bpe4000_batch256'
if glob.glob(checkpoint_path + '*'):
print("Loading pre-trained model.")
model.load_weights(checkpoint_path)
else:
print("Training from scratch")
model.fit(iterate_minibatches(lines, batch_size=256, max_len=max_len),
epochs=100, steps_per_epoch=256,
callbacks=[keras.callbacks.ModelCheckpoint(checkpoint_path, monitor='loss')])
###Output
Loading pre-trained model.
###Markdown
Make JS-compatible language model applier
###Code
# custom keras model that
# * applies a single step of LSTM
# * uses pure keras, no custom python code
l_prev_tokens = L.Input([None], dtype='int32')
l_prev_hid = L.Input([model.lstm.units], dtype='float32', name='previous_lstm_hid')
l_prev_cell = L.Input([model.lstm.units], dtype='float32', name='previous_lstm_cell')
l_prev_emb = model.emb(l_prev_tokens) # [batch, emb_size]
_, l_new_hid, l_new_cell = model.lstm(l_prev_emb, initial_state=[l_prev_hid, l_prev_cell])
l_new_logits = model.logits(l_new_hid)
model_step = keras.models.Model([l_prev_tokens, l_prev_hid, l_prev_cell],
[l_new_logits, l_new_hid, l_new_cell])
tfjs.converters.save_keras_model(model_step, './lm')
# test model step from python
h = c = tf.ones([1, model.lstm.units])
model_step((tf.convert_to_tensor([[3]], dtype='int32'), h, c))
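# --- Usage sketch (an assumption, not part of the original notebook): ---
# greedy decoding with the single-step model, starting from the BOS token
# and with a fresh all-zero LSTM state.
h = tf.zeros([1, model.lstm.units])
c = tf.zeros([1, model.lstm.units])
token = tf.convert_to_tensor([[voc.bos_ix]], dtype='int32')
generated = []
for _ in range(20):                         # generate at most 20 subword tokens
    logits, h, c = model_step((token, h, c))
    next_ix = int(tf.argmax(logits, axis=-1)[0])
    if next_ix == voc.eos_ix:               # stop at end-of-sequence
        break
    generated.append(voc.tokens[next_ix])
    token = tf.convert_to_tensor([[next_ix]], dtype='int32')
print(' '.join(generated))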
# save bpe and vocabulary
with open('./frontend/voc.json', 'w') as f:
packed_bpe_rules = list(map(list, sorted(bpeizer.bpe_codes.keys(), key=bpeizer.bpe_codes.get)))
json.dump([model.lstm.units, model.emb.output_dim, model.logits.units,
packed_bpe_rules, voc.tokens, voc.bos, voc.eos, voc.unk], f)
voc.to_matrix(['deep neural'])[:, :-1].shape
###Output
_____no_output_____ |
2019_03_17/PyTorch_Classify.ipynb | ###Markdown
Generating the sample data
###Code
# Generate the sample data
noisy_moons, labels = datasets.make_moons(n_samples=1000, noise=.05, random_state=10) # generate 1000 samples and add noise
# Use 800 samples for training and 200 for testing
X_train,Y_train,X_test,Y_test = noisy_moons[:-200],labels[:-200],noisy_moons[-200:],labels[-200:]
print(len(X_train),len(Y_train),len(X_test),len(Y_test))
plt.figure(figsize=(8,6))
plt.scatter(X_test[:,0],X_test[:,1],c=Y_test)
###Output
_____no_output_____
###Markdown
Building the network
###Code
import torch as t
from torch import nn
from torch import optim
from torch.autograd import Variable
import torch.utils.data as Data
from IPython import display
# Build the network
class classifer(nn.Module):
def __init__(self):
super(classifer, self).__init__()
self.class_col = nn.Sequential(
nn.Linear(2,16),
nn.ReLU(),
nn.Linear(16,32),
nn.ReLU(),
nn.Linear(32,32),
nn.ReLU(),
nn.Linear(32,32),
nn.ReLU(),
nn.Linear(32,2),
)
def forward(self, x):
out = self.class_col(x)
return out
# ----------------
# Define the optimizer and the loss function
# ----------------
from torch import optim
model = classifer() # instantiate the model
loss_fn = nn.CrossEntropyLoss() # define the loss function
optimiser = optim.SGD(params=model.parameters(), lr=0.05) # define the optimizer
# net
print(model)
###Output
classifer(
(class_col): Sequential(
(0): Linear(in_features=2, out_features=16, bias=True)
(1): ReLU()
(2): Linear(in_features=16, out_features=32, bias=True)
(3): ReLU()
(4): Linear(in_features=32, out_features=32, bias=True)
(5): ReLU()
(6): Linear(in_features=32, out_features=32, bias=True)
(7): ReLU()
(8): Linear(in_features=32, out_features=2, bias=True)
)
)
###Markdown
Define the variables
###Code
# ------
# Define the variables
# ------
from torch.autograd import Variable
import torch.utils.data as Data
X_train = t.Tensor(X_train) # input x tensor
X_test = t.Tensor(X_test)
Y_train = t.Tensor(Y_train).long() # target y tensor
Y_test = t.Tensor(Y_test).long()
# Train with mini-batches
torch_dataset = Data.TensorDataset(X_train, Y_train) # combine the training data and the targets
MINIBATCH_SIZE = 25
loader = Data.DataLoader(
dataset=torch_dataset,
batch_size=MINIBATCH_SIZE,
shuffle=True,
num_workers=2 # set multi-work num read data
)
###Output
_____no_output_____
###Markdown
Run the training
###Code
# ---------
# Run the training loop
# ---------
loss_list = []
for epoch in range(200):
for step, (batch_x, batch_y) in enumerate(loader):
batch_x = Variable(batch_x)
batch_y = Variable(batch_y)
optimiser.zero_grad() # 梯度清零
out = model(batch_x) # 前向传播
loss = loss_fn(out, batch_y) # 计算损失
loss.backward() # 反向传播
optimiser.step() # 随机梯度下降
loss_list.append(loss)
# 下面都是绘图的代码, 可以不看, 记录loss即可
if epoch%10==0:
outputs = model(X_test)
_, predicted = t.max(outputs, 1)
display.clear_output(wait=True)
plt.style.use('ggplot')
plt.figure(figsize=(12, 8))
plt.scatter(X_test[:,0].numpy(),X_test[:,1].numpy(),c=predicted)
plt.title("epoch: {}, loss:{}".format(epoch+1, loss))
plt.show()
plt.figure(figsize=(8,6))
plt.plot(loss_list)
test = t.tensor([2.0, 3.0])
test = Variable(test, requires_grad=True)
def f(x):
    return 2 * x      # the original body was missing the return statement
y = f(test)
y.sum().backward()    # backpropagate so that test.grad gets populated
test.grad             # .grad is an attribute, not a method
###Output
_____no_output_____ |
codes/labs_lecture13/lecture13_point_cloud_classification_exercise.ipynb | ###Markdown
Lab 01 : Point Cloud Classification - exerciseThe goal is to implement an architecture that classifies point clouds.
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
path_to_file = '/content/gdrive/My Drive/CS4243_codes/codes/labs_lecture13'
print(path_to_file)
# move to Google Drive directory
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import torch.optim as optim
import utils
import time
#device= torch.device("cuda")
device= torch.device("cpu")
print(device)
###Output
_____no_output_____
###Markdown
Generate the dataset
###Code
# Libraries
from torchvision.transforms import ToTensor
from PIL import Image
import matplotlib.pyplot as plt
import logging
logging.getLogger().setLevel(logging.CRITICAL) # remove warnings
# Import 5 object types
ob_size = 11
objects = torch.zeros(5,ob_size,ob_size)
nb_class_objects = 5
for k in range(nb_class_objects):
objects[k,:,:] = 1-ToTensor()(Image.open('objects/obj'+str(k+1)+'.tif'))[0,:,:]
print(objects.size())
# Define the bounding boxes w.r.t. object type
def obj_legend(label):
if label==0:
color = 'r'; legend = 'Triangle'
elif label==1:
color = 'b'; legend = 'Cross'
elif label==2:
color = 'g'; legend = 'Star'
elif label==3:
color = 'y'; legend = 'Square'
elif label==4:
color = 'm'; legend = 'Ring'
return color, legend
# Global constants
# im_size = image size
# ob_size = object size
# batch_size = batch size
# nb_object_classes = number of object classes (we have 5 classes)
im_size = 28
batch_size = 2
nb_object_classes = 5
nb_points = 35 # min=41 max=66
# Function that generate a batch of training data
def generate_batch_data(im_size, ob_size, batch_size, nb_points, nb_object_classes):
batch_images = torch.zeros(batch_size,im_size,im_size)
batch_points = torch.zeros(batch_size,nb_points,2)
batch_labels = torch.zeros(batch_size)
for b in range(batch_size):
image = torch.zeros(im_size,im_size)
class_object = torch.LongTensor(1).random_(0,nb_object_classes)
offset = (ob_size-1)// 2 + 0
coord_objects = torch.LongTensor(2).random_(offset,im_size-offset)
# coord_objects[0] = x-coordinate, coord_objects[1] = y-coordinate
image[coord_objects[1]-offset:coord_objects[1]-offset+ob_size,coord_objects[0]-offset:coord_objects[0]-offset+ob_size] = objects[class_object,:,:]
# find x,y s.t. image[y,x]=0.5
obj_yx = torch.Tensor(plt.contour(image, [0.5]).collections[0].get_paths()[0].vertices); plt.clf()
obj_yx[:,[0,1]] = obj_yx[:,[1,0]]
if class_object==4: # get the interior for the ring shape
obj_yx_tmp = torch.Tensor(plt.contour(image, [0.5]).collections[0].get_paths()[1].vertices); plt.clf()
obj_yx_tmp[:,[0,1]] = obj_yx_tmp[:,[1,0]]
obj_yx = torch.cat((obj_yx,obj_yx_tmp),dim=0)
nb_yx_pts = obj_yx.size(0)
if nb_yx_pts>=nb_points:
idx_perm = torch.randperm(nb_yx_pts)[:nb_points]
obj_yx = obj_yx[idx_perm,:]
else: # in case of plt.contour does not extract enough data points
obj_yx = obj_yx.repeat_interleave(nb_points//nb_yx_pts+1,dim=0)[:nb_points]
batch_images[b,:,:] = image
batch_points[b,:,:] = obj_yx
batch_labels[b] = class_object
return batch_images, batch_points, batch_labels
# Plot a mini-batch of images
batch_images, batch_points, batch_labels = generate_batch_data(im_size, ob_size, batch_size, nb_points, nb_object_classes)
print(batch_images.size())
print(batch_points.size())
print(batch_labels.size())
for b in range(batch_size):
#plt.imshow(batch_images[b,:,:], cmap='gray')
plt.imshow(torch.zeros(im_size,im_size), cmap='gray')
color, legend = obj_legend(batch_labels[b])
plt.scatter(batch_points[b,:,1],batch_points[b,:,0],marker='+',color=color,label=legend)
plt.legend(loc='best')
plt.colorbar()
plt.title('Point Cloud')
#plt.axis('off')
plt.show()
# Define the cloud network architecture
batch_size = 2 # for debug
class cloudNN(nn.Module):
def __init__(self):
super(cloudNN, self).__init__()
hidden_dim = 250
# first set layer
# COMPLETE HERE
# second set layer
# COMPLETE HERE
# classification layer
# COMPLETE HERE
def forward(self, x):
# first set layer
# COMPLETE HERE
# second set layer
# COMPLETE HERE
# classification layer
# COMPLETE HERE
return scores_cloud_class
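# --------------------------------------------------------------------------
# One possible completion of cloudNN (a sketch only, NOT the official
# solution of this exercise; the layer sizes and the mean-pooling choice are
# assumptions). Each nn.Linear acts on the last dimension, so it is applied
# to every point independently; averaging over the point dimension makes the
# classifier invariant to the ordering of the points.
# --------------------------------------------------------------------------
class cloudNNSketch(nn.Module):
    def __init__(self):
        super(cloudNNSketch, self).__init__()
        hidden_dim = 250
        # first set layer: per-point map from (y, x) coordinates to hidden features
        self.set1 = nn.Linear(2, hidden_dim)
        # second set layer: per-point map hidden_dim -> hidden_dim
        self.set2 = nn.Linear(hidden_dim, hidden_dim)
        # classification layer: pooled features -> 5 object classes
        self.classifier = nn.Linear(hidden_dim, 5)  # 5 = nb_object_classes above
    def forward(self, x):                            # x: [batch, nb_points, 2]
        h = torch.relu(self.set1(x))                 # [batch, nb_points, hidden_dim]
        h = torch.relu(self.set2(h))                 # [batch, nb_points, hidden_dim]
        h = h.mean(dim=1)                            # pool over points -> [batch, hidden_dim]
        scores_cloud_class = self.classifier(h)      # [batch, nb_object_classes]
        return scores_cloud_class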
# Instantiate the network
net = cloudNN()
net = net.to(device)
print(net)
utils.display_num_param(net)
# Test the forward pass, backward pass and gradient update with a single batch
init_lr = 0.001
optimizer = torch.optim.Adam(net.parameters(), lr=init_lr)
batch_images, batch_points, batch_labels = generate_batch_data(im_size, ob_size, batch_size, nb_points, nb_object_classes)
optimizer.zero_grad()
scores_cloud_class = net(batch_points) # [batch_size, nb_object_classes] = [2, 5]
batch_labels = batch_labels.long() # [batch_size] = [2]
# loss
loss = nn.CrossEntropyLoss()(scores_cloud_class, batch_labels)
loss.backward()
optimizer.step()
# Training loop
net = cloudNN()
net = net.to(device)
utils.display_num_param(net)
# Optimizer
init_lr = 0.001
optimizer = torch.optim.Adam(net.parameters(), lr=init_lr)
# Number of mini-batches per epoch
nb_batch = 10
batch_size = 10
start=time.time()
for epoch in range(10):
running_loss = 0.0
num_batches = 0
for _ in range(nb_batch):
# FORWARD AND BACKWARD PASS
batch_images, batch_points, batch_labels = generate_batch_data(im_size, ob_size, batch_size, nb_points, nb_object_classes)
optimizer.zero_grad()
scores_cloud_class = net(batch_points) # [batch_size, nb_object_classes] = [2, 5]
batch_labels = batch_labels.long() # [batch_size] = [2]
# loss
loss = nn.CrossEntropyLoss()(scores_cloud_class, batch_labels)
loss.backward()
optimizer.step()
# COMPUTE STATS
running_loss += loss.detach().item()
num_batches += 1
# AVERAGE STATS THEN DISPLAY
total_loss = running_loss/num_batches
elapsed = (time.time()-start)/60
print('epoch=',epoch, '\t time=', elapsed,'min', '\t lr=', init_lr ,'\t loss=', total_loss )
# Test time
# select a batch of 2 images
batch_size = 5
# generate the batch of 2 images
batch_images, batch_points, batch_labels = generate_batch_data(im_size, ob_size, batch_size, nb_points, nb_object_classes)
# forward pass
scores_cloud_class = net(batch_points) # [batch_size, nb_object_classes]
# class prediction
pred_cloud_class = torch.argmax(scores_cloud_class, dim=1) # [batch_size]
# Plot the ground truth solution and the predicted solution
for b in range(batch_size):
#plt.imshow(batch_images[b,:,:], cmap='gray')
plt.imshow(torch.zeros(im_size,im_size), cmap='gray')
color, legend = obj_legend(batch_labels[b])
plt.scatter(batch_points[b,:,1],batch_points[b,:,0],marker='+',color=color,label=legend)
plt.legend(loc='best')
plt.colorbar()
plt.title('Ground Truth')
plt.show()
#plt.imshow(batch_images[b,:,:], cmap='gray')
plt.imshow(torch.zeros(im_size,im_size), cmap='gray')
color, legend = obj_legend(pred_cloud_class[b])
plt.scatter(batch_points[b,:,1],batch_points[b,:,0],marker='+',color=color,label=legend)
plt.legend(loc='best')
plt.colorbar()
plt.title('Prediction')
plt.show()
###Output
_____no_output_____ |
ism415-Copy1.ipynb | ###Markdown
Introduction to Data Science with Python
###Code
import pandas # import the pandas library
dir(pandas) # list the attributes available in the pandas module
data = pandas.read_csv('IMDB-Movie-Data.csv', index_col='Title')
data
text1 = "Ethics are built right into the ideals and objectives of the United Nations"
text1
len(text1) #the length of the text
text2 = text1.split(' ') # return a list of the words in text1, splitting on spaces
len(text2)
text2
[w for w in text2 if len(w) > 3] # words that are greater than 3 letters long in text 2
[w for w in text2 if w == 'the'] # to find a particular word
[w for w in text2 if w == "United" or w =="Nations"] # to find specific words
[w for w in text2 if w.istitle()] #capitalized words
[w for w in text2 if w.endswith('s')] # words that end with a specific letter s
[w for w in text2 if w.islower()] # uncapitalized (all-lowercase) words
text3 = 'To be or not to be'
text4 = text3.split(' ')
text4
len(text4)
len(set(text4))
len(set([w.lower() for w in text4])) # number of unique words, ignoring case
set([w.upper() for w in text4]) # to make the letters uppercase
text5 = 'ouagadougou'
text6 = text5.split('ou')
text6
'ou'.join(text6)
text7 ='kingk'
text8 = text7.split('k')
text8
'k'.join(text8)
text9= 'A quick brown fox jumped over the lazy dog'
text9
text10 = text9.strip()
text10
text10.split(' ')
file =open('UNDHR')
###Output
_____no_output_____ |
examples/attendance/Entity Resolution on Organizations.ipynb | ###Markdown
Start by getting the attendance information for IETF.
###Code
attendance106 = ia.attendance_tables(106)
###Output
(1639, 6)
###Markdown
What organizations are best represented?
###Code
attendance106.groupby('Organization') \
.count()['First Name'] \
.sort_values(ascending=False)[:30]
###Output
_____no_output_____
###Markdown
Even in this short list, there are repeat names. We need to apply entity resolution.
###Code
attendance106['Organization'].dropna().unique().shape
###Output
_____no_output_____
###Markdown
This is too many names! It will overwhelm the entity resolver. Let's use a subset of the most relevant entities.
###Code
N = 250
topN = attendance106.groupby('Organization')\
.count()['First Name']\
.sort_values(ascending=False)[:N]
distance_matrix = process.matricize(topN.index,
process.containment_distance) \
.replace(to_replace=float('inf'), value= 100)
plt.pcolor(distance_matrix)
plt.colorbar()
ents = process.resolve_entities(topN,
process.containment_distance,
threshold=.25)
replacements = {}
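# Flatten the resolved entity clusters into a single {raw name -> canonical name} lookup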
for r in [{name: ent for name in ents[ent]} for ent in ents]:
replacements.update(r)
attendance106_clean = attendance106.replace(to_replace=replacements)
attendance106_clean.groupby('Organization') \
.size() \
.sort_values(ascending=False)[:30]
###Output
_____no_output_____ |
4. Neural Networks/1. Predicting Student Admissions - Neural Network.ipynb | ###Markdown
Predicting Student Admissions with Neural NetworksIn this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data:- GRE Scores (Test)- GPA Scores (Grades)- Class rank (1-4)The dataset originally came from here: http://www.ats.ucla.edu/ Loading the dataTo load the data and format it nicely, we will use two very useful packages called Pandas and Numpy. You can read the documentation here:- https://pandas.pydata.org/pandas-docs/stable/- https://docs.scipy.org/
###Code
# Importing pandas and numpy
import pandas as pd
import numpy as np
# Reading the csv file into a pandas DataFrame
data = pd.read_csv('student_data.csv')
# Printing out the first 10 rows of our data
data[:10]
###Output
_____no_output_____
###Markdown
Plotting the dataFirst let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ignore the rank.
###Code
# Importing matplotlib
import matplotlib.pyplot as plt
# Function to help us plot
def plot_points(data):
X = np.array(data[["gre","gpa"]])
y = np.array(data["admit"])
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')
plt.xlabel('Test (GRE)')
plt.ylabel('Grades (GPA)')
# Plotting the points
plot_points(data)
plt.show()
###Output
_____no_output_____
###Markdown
Roughly, it looks like the students with high scores in the grades and test passed, while the ones with low scores didn't, but the data is not as nicely separable as we hoped it would be. Maybe it would help to take the rank into account? Let's make 4 plots, one for each rank.
###Code
# Separating the ranks
data_rank1 = data[data["rank"]==1]
data_rank2 = data[data["rank"]==2]
data_rank3 = data[data["rank"]==3]
data_rank4 = data[data["rank"]==4]
# Plotting the graphs
plot_points(data_rank1)
plt.title("Rank 1")
plt.show()
plot_points(data_rank2)
plt.title("Rank 2")
plt.show()
plot_points(data_rank3)
plt.title("Rank 3")
plt.show()
plot_points(data_rank4)
plt.title("Rank 4")
plt.show()
###Output
_____no_output_____
###Markdown
This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it. TODO: One-hot encoding the rankUse the `get_dummies` function in Pandas in order to one-hot encode the data.
###Code
# TODO: Make dummy variables for rank
one_hot_data = pd.concat([data, pd.get_dummies(data['rank'], prefix='rank')], axis=1)
# TODO: Drop the previous rank column
one_hot_data = one_hot_data.drop(columns="rank")
# Print the first 10 rows of our data
one_hot_data[:10]
###Output
_____no_output_____
###Markdown
TODO: Scaling the dataThe next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. This means our data is skewed, and that makes it hard for a neural network to handle. Let's fit our two features into a range of 0-1, by dividing the grades by 4.0, and the test score by 800.
###Code
# Making a copy of our data
processed_data = one_hot_data[:]
# TODO: Scale the columns
processed_data["gre"] = processed_data["gre"]/800
processed_data["gpa"] = processed_data["gpa"]/4.0
# Printing the first 10 rows of our processed data
processed_data[:10]
###Output
_____no_output_____
###Markdown
Splitting the data into Training and Testing In order to test our algorithm, we'll split the data into a Training and a Testing set. The size of the testing set will be 10% of the total data.
###Code
sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)
train_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)
print("Number of training samples is", len(train_data))
print("Number of testing samples is", len(test_data))
print(train_data[:10])
print(test_data[:10])
###Output
Number of training samples is 360
Number of testing samples is 40
admit gre gpa rank_1 rank_2 rank_3 rank_4
36 0 0.725 0.8125 1 0 0 0
93 0 0.725 0.7325 0 1 0 0
387 0 0.725 0.8400 0 1 0 0
370 1 0.675 0.9425 0 1 0 0
229 1 0.900 0.8550 0 1 0 0
312 0 0.825 0.9425 0 0 1 0
195 0 0.700 0.8975 0 1 0 0
88 0 0.875 0.8200 1 0 0 0
323 0 0.525 0.6725 0 1 0 0
330 0 0.925 1.0000 0 0 1 0
admit gre gpa rank_1 rank_2 rank_3 rank_4
9 0 0.875 0.9800 0 1 0 0
32 0 0.750 0.8500 0 0 1 0
33 1 1.000 1.0000 0 0 1 0
42 1 0.750 0.7875 0 1 0 0
52 0 0.925 0.8425 0 0 0 1
56 0 0.700 0.7975 0 0 1 0
57 0 0.475 0.7350 0 0 1 0
59 0 0.750 0.7050 0 0 0 1
61 0 0.700 0.8300 0 0 0 1
72 0 0.600 0.8475 0 0 0 1
###Markdown
Splitting the data into features and targets (labels)Now, as a final step before the training, we'll split the data into features (X) and targets (y).
###Code
features = train_data.drop('admit', axis=1)
targets = train_data['admit']
features_test = test_data.drop('admit', axis=1)
targets_test = test_data['admit']
print(features[:10])
print(targets[:10])
###Output
gre gpa rank_1 rank_2 rank_3 rank_4
36 0.725 0.8125 1 0 0 0
93 0.725 0.7325 0 1 0 0
387 0.725 0.8400 0 1 0 0
370 0.675 0.9425 0 1 0 0
229 0.900 0.8550 0 1 0 0
312 0.825 0.9425 0 0 1 0
195 0.700 0.8975 0 1 0 0
88 0.875 0.8200 1 0 0 0
323 0.525 0.6725 0 1 0 0
330 0.925 1.0000 0 0 1 0
36 0
93 0
387 0
370 1
229 1
312 0
195 0
88 0
323 0
330 0
Name: admit, dtype: int64
###Markdown
Training the 2-layer Neural NetworkThe following function trains the 2-layer neural network. First, we'll write some helper functions.
###Code
# Activation (sigmoid) function
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def sigmoid_prime(x):
return sigmoid(x) * (1-sigmoid(x))
def error_formula(y, output):
return - y*np.log(output) - (1 - y) * np.log(1-output)
###Output
_____no_output_____
###Markdown
TODO: Backpropagate the errorNow it's your turn to shine. Write the error term. Remember that this is given by the equation $$ (y-\hat{y}) \sigma'(x) $$
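To connect this to the training loop below: with learning rate $\eta$ and $N$ records, each record contributes $$ \Delta w \leftarrow \Delta w + (y-\hat{y})\,\sigma'(x)\,x $$ and the weights are then updated as $w \leftarrow w + \eta\,\Delta w / N$, which is exactly what the `del_w` accumulation and the `weights` update in the code below do.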
###Code
# TODO: Write the error term formula
def error_term_formula(x, y, output):
return (y- output) * sigmoid_prime(x)
# Neural Network hyperparameters
epochs = 1000
learnrate = 0.5
# Training function
def train_nn(features, targets, epochs, learnrate):
# Use the same seed to make debugging easier
np.random.seed(42)
n_records, n_features = features.shape
last_loss = None
# Initialize weights
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features.values, targets):
# Loop through all records, x is the input, y is the target
# Activation of the output unit
# Notice we multiply the inputs and the weights here
# rather than storing h as a separate variable
output = sigmoid(np.dot(x, weights))
# The error, computed with the cross-entropy formula defined above
error = error_formula(y, output)
# The error term
error_term = error_term_formula(x, y, output)
# The gradient descent step: accumulate the error term times the inputs
del_w += error_term * x
# Update the weights here. The learning rate times the
# change in weights, divided by the number of records to average
weights += learnrate * del_w / n_records
# Printing out the mean square error on the training set
if e % (epochs / 10) == 0:
out = sigmoid(np.dot(features, weights))
loss = np.mean((out - targets) ** 2)
print("Epoch:", e)
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
print("=========")
print("Finished training!")
return weights
weights = train_nn(features, targets, epochs, learnrate)
###Output
Epoch: 0
Train loss: 0.274206433965
=========
Epoch: 100
Train loss: 0.211111862409
=========
Epoch: 200
Train loss: 0.208386980268
=========
Epoch: 300
Train loss: 0.207059290442
=========
Epoch: 400
Train loss: 0.206372980342
=========
Epoch: 500
Train loss: 0.205980961861
=========
Epoch: 600
Train loss: 0.205727687535
=========
Epoch: 700
Train loss: 0.205542210719
=========
Epoch: 800
Train loss: 0.205391263813
=========
Epoch: 900
Train loss: 0.205258798724
=========
Finished training!
###Markdown
Calculating the Accuracy on the Test Data
###Code
# Calculate accuracy on test data
test_out = sigmoid(np.dot(features_test, weights))
predictions = test_out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
###Output
Prediction accuracy: 0.675
|
notebooks/Fig1A.ipynb | ###Markdown
The following lines need to be used if the data from the downloaded dataset should be used. The location of the ``Data`` folder needs to be specified by the parameter ``DATA_FOLDER_PATH`` in the file ``input_params.json``. If you want to analyse your own dataset, you need to set the variable ``file_path`` to the folder where the simulation is located. Importantly, this folder should contain exactly one simulation.
###Code
file_path_input_params_json = '../input_params.json'
input_param_dict = mainClass.extract_variables_from_input_params_json(file_path_input_params_json)
root_path = input_param_dict["DATA_FOLDER_PATH"]
simulation_location = 'fig_1'
file_path = os.path.join(root_path, simulation_location)
print('file_path', file_path)
parameter_path = os.path.join(file_path, 'parameter_set.csv')
print('parameter_path', parameter_path)
###Output
file_path /home/berger/Documents/Arbeit/PhD/data/UltrasensitivityCombined/NatCom/fig_1
parameter_path /home/berger/Documents/Arbeit/PhD/data/UltrasensitivityCombined/NatCom/fig_1/parameter_set.csv
###Markdown
Make data frame from time traces
###Code
data_frame = makeDataframe.make_dataframe(file_path)
time_traces_data_frame = pd.read_hdf(data_frame['path_dataset'].iloc[0], key='dataset_time_traces')
v_init_data_frame = pd.read_hdf(data_frame['path_dataset'].iloc[0], key='dataset_init_events')
v_init = v_init_data_frame.iloc[0]['v_init']
v_init
n_ori = np.array(time_traces_data_frame["n_ori"])
time = np.array(time_traces_data_frame["time"])
volume = np.array(time_traces_data_frame["volume"])
n_ori_density = n_ori / volume
###Output
_____no_output_____
###Markdown
Color definitions
###Code
pinkish_red = (247 / 255, 109 / 255, 109 / 255)
green = (0 / 255, 133 / 255, 86 / 255)
dark_blue = (36 / 255, 49 / 255, 94 / 255)
light_blue = (168 / 255, 209 / 255, 231 / 255)
blue = (55 / 255, 71 / 255, 133 / 255)
yellow = (247 / 255, 233 / 255, 160 / 255)
###Output
_____no_output_____
###Markdown
Plot three figures
###Code
label_list = [r'$V(t)$', r'$n_{ori}(t)$', r'$\rho_{ori}(t)$']
x_axes_list = [time, time, time]
y_axes_list = [volume, n_ori, n_ori_density]
color_list = [green, dark_blue, pinkish_red]
fig, ax = plt.subplots(3, figsize=(6,3))
plt.xlabel('time')
for item in range(0, len(label_list)):
ax[item].set_ylabel(label_list[item])
ax[item].plot(x_axes_list[item], y_axes_list[item], color=color_list[item])
ax[item].set_ylim(ymin=0)
ax[item].tick_params(
axis='x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
labelbottom=False) # labels along the bottom edge are off
ax[item].spines["top"].set_visible(False)
ax[item].spines["right"].set_visible(False)
ax[item].margins(0)
ax[0].set_yticks([v_init, 2*v_init])
ax[0].set_yticklabels([ r'$v^\ast$', r'$2 \, v^\ast$'])
ax[0].tick_params(axis='y', colors=green)
ax[0].axhline(y=v_init, color=green, linestyle='--')
ax[0].axhline(y=2*v_init, color=green, linestyle='--')
ax[1].set_yticks([2, 4])
ax[1].set_yticklabels([ r'2', r'4'])
ax[2].axhline(y=1/v_init, color=pinkish_red, linestyle='--')
ax[2].set_yticks([1/v_init])
ax[2].set_yticklabels([ r'$\rho^\ast$'])
ax[2].tick_params(axis='y', colors=pinkish_red)
plt.savefig(file_path + '/fig1_higher.pdf', format='pdf')
###Output
_____no_output_____ |
_notebooks/2021-10-11-dive_into_xml.ipynb | ###Markdown
Dive into XML This notebook is the result of reading through [chapter 12 "XML"](https://diveintopython3.net/xml.html) of Mark Pilgrim's "Dive into Python 3". This notebook, like the original book, is licensed under the Creative Commons Attribution Share-Alike license ([CC-BY-SA-3.0](https://creativecommons.org/licenses/by-sa/3.0/)). ElementTree`ElementTree` from the standard library and `lxml` are the most prevalent tools in the Python world for processing XML. The ElementTree library is part of the Python standard library
###Code
import xml.etree.ElementTree as etree
###Output
_____no_output_____
###Markdown
The primary entry point for the ElementTree library is the `parse()` function, which can take a filename or a file-like object. This function parses the entire document at once. If memory is tight, there are ways to parse an XML document incrementally instead.
###Code
tree = etree.parse('feed.xml')
###Output
_____no_output_____
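###Markdown
As an aside on the memory remark above, here is a minimal sketch of incremental parsing with `iterparse()` (reusing the `etree` import and the `feed.xml` file from this chapter):
###Code
# Stream the document element by element instead of loading it all at once;
# clearing each entry after handling it keeps memory usage low.
for event, elem in etree.iterparse('feed.xml', events=('end',)):
    if elem.tag == '{http://www.w3.org/2005/Atom}entry':
        # ... process the entry here, then release it ...
        elem.clear()
###Output
_____no_output_____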
###Markdown
The `parse()` function returns an object which represents the entire document. This is not the root element. To get a reference to the root element, call the `getroot()` method.
###Code
root = tree.getroot()
###Output
_____no_output_____
###Markdown
In the example file `feed.xml` the root element is the feed element in the http://www.w3.org/2005/Atom namespace. The string representation of this object reinforces an important point: an XML element is a combination of its namespace and its tag name (also called the local name). Every element in this document is in the Atom namespace, so the root element is represented as {http://www.w3.org/2005/Atom}feed.
###Code
root
###Output
_____no_output_____
###Markdown
*ElementTree represents XML elements as `{namespace}localname`. You’ll see and use this format in multiple places in the ElementTree `API`.*
###Code
root.tag
###Output
_____no_output_____
###Markdown
The “length” of the root element is the number of child elements.
###Code
len(root)
###Output
_____no_output_____
###Markdown
An element can be used as an iterator to loop through all of its child elements. The list of child elements only includes `direct` children.
###Code
for child in root:
print(child)
###Output
<Element '{http://www.w3.org/2005/Atom}title' at 0x10b8e7e50>
<Element '{http://www.w3.org/2005/Atom}subtitle' at 0x10b8e7ef0>
<Element '{http://www.w3.org/2005/Atom}id' at 0x10b8ed040>
<Element '{http://www.w3.org/2005/Atom}updated' at 0x10b8ed0e0>
<Element '{http://www.w3.org/2005/Atom}link' at 0x10b8ed220>
<Element '{http://www.w3.org/2005/Atom}entry' at 0x10b8ed2c0>
<Element '{http://www.w3.org/2005/Atom}entry' at 0x10b8eda90>
<Element '{http://www.w3.org/2005/Atom}entry' at 0x10b8edf90>
###Markdown
Attributes Are DictionariesOnce you have a reference to a specific element, you can easily get its attributes as a Python dictionary.
###Code
root.attrib
root[4]
root[4].attrib
root[3]
# The updated element has no attributes,
# so its .attrib is just an empty dictionary.
root[3].attrib
###Output
_____no_output_____
###Markdown
Searching For Nodes Within An XML Document findall()Each element — including the root element, but also child elements — has a `findall()` method. It finds all matching elements among the element’s children.
###Code
tree
# We will need to use the namespace a lot, so we make this shortcut
namespace = '{http://www.w3.org/2005/Atom}'
root.findall(f'{namespace}entry')
root.tag
# This query returns an empty list because the root
# element 'feed' does not have any child element 'feed'
root.findall(f'{namespace}feed')
# This query only finds direct children. The author nodes are nested,
# therefore this query returns an empty list
root.findall(f'{namespace}author')
###Output
_____no_output_____
###Markdown
For convenience, the `tree` object (returned from the `etree.parse()` function) has several methods that mirror the methods on the root element. The results are the same as if you had called the `tree.getroot().findall()` method.
###Code
tree.findall(f'{namespace}entry')
tree.findall(f'{namespace}author')
###Output
_____no_output_____
###Markdown
find()The `find()` method takes an ElementTree query returns the first matching element. This is useful for situations where you are only expecting one match, or if there are multiple matches, you only care about the first one.
###Code
entries = tree.findall(f'{namespace}entry')
len(entries)
# Get the first title, secretly we know there is only one title
title_element = entries[0].find(f'{namespace}title')
title_element.text
###Output
_____no_output_____
###Markdown
There are no elements in this entry named `foo`, so this returns `None`.
###Code
foo_element = entries[0].find(f'{namespace}foo')
foo_element
type(foo_element)
###Output
_____no_output_____
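###Markdown
A small sketch of the reliable way to test `find()` results (reusing `entries` and `namespace` from above); the next cell explains why truth-testing the element directly is a trap:
###Code
# The title element exists but has no child elements, so truth-testing it
# would be misleading; compare against None instead.
title_element = entries[0].find(f'{namespace}title')
found_title = title_element is not None   # True  -- the element was found
found_foo = foo_element is not None       # False -- nothing named 'foo' here
###Output
_____no_output_____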
###Markdown
**Beware:** In a boolean context, ElementTree element objects will evaluate to `False` if they contain no children (i.e. if `len(element)` is 0). This means that `if element.find('...')` is not testing whether the `find()` method found a matching element; it’s testing whether that matching element has any child elements! To test whether the `find()` method returned an element, use `if element.find('...') is not None`. Search for descendant elementsA query like `//{http://www.w3.org/2005/Atom}link` with the two slashes at the beginning finds any elements, regardless of nesting level.
###Code
all_links = tree.findall(f'.//{namespace}link')
all_links
all_links[0].attrib
all_links[1].attrib
all_links[2].attrib
all_links[3].attrib
###Output
_____no_output_____
###Markdown
ElementTree’s `findall()` method is a very powerful feature, but the query language can be a bit surprising. ElementTree’s query language is similar enough to XPath to do basic searching, but dissimilar enough that it may annoy you if you already know XPath. Parsing with lxml`lxml` utilizes the popular `libxml2` parser. It provides a 100% compatible ElementTree API, then extends it with full XPath 1.0 support and a few other niceties.
###Code
# We will need to use the namespace a lot, so we make this shortcut
namespace = '{http://www.w3.org/2005/Atom}'
from lxml import etree
tree = etree.parse('feed.xml')
root = tree.getroot()
root.findall(f'{namespace}entry')
###Output
_____no_output_____
###Markdown
For large XML documents `lxml` is significantly faster than the `built-in` ElementTree library. If you’re only using the ElementTree API and want to use the fastest available implementation, you can try to import `lxml` and fall back to the built-in ElementTree.
###Code
try:
from lxml import etree
except ImportError:
import xml.etree.ElementTree as etree
###Output
_____no_output_____
###Markdown
The following query finds all elements in the Atom namespace, anywhere in the document, that have an `href` attribute. The `//` at the beginning of the query means “elements anywhere (not just as children of the root element).” `{http://www.w3.org/2005/Atom}` means “only elements in the Atom namespace.” `*` means “elements with any local name.” And `[@href]` means “has an href attribute.”
###Code
tree.findall(f'//{namespace}*[@href]')
tree.findall(f"//{namespace}*[@href='http://diveintomark.org/']")
# Using NS as name of the namespace variable is a cool idea
NS = '{http://www.w3.org/2005/Atom}'
###Output
_____no_output_____
###Markdown
The following query searches for Atom `author` elements that have an Atom `uri` element as a child. This only returns two `author` elements, the ones in the first and second `entry`. The `author` in the last `entry` contains only a `name`, not a `uri`.
###Code
tree.findall(f'//{NS}author[{NS}uri]')
###Output
_____no_output_____
###Markdown
XPath support in lxmlTechnically, an XPath expression returns a list of nodes. (That's what the DOM of a parsed XML document is made up of.) Depending on their type, nodes can be elements, attributes, or even text content. To perform XPath queries on namespaced elements, you need to define a namespace prefix mapping. This is just a Python dictionary.
###Code
NSMAP = {'atom': 'http://www.w3.org/2005/Atom'}
###Output
_____no_output_____
###Markdown
The XPath expression searches for `category` elements (in the Atom namespace) that contain a `term` attribute with the value `accessibility`. The `/..` bit means to return the parent element of the category element you just found. So this single XPath query will find all entries with a child element of `<category term="accessibility">`. In this case the `xpath()` function returns a list of ElementTree objects.
###Code
entries = tree.xpath("//atom:category[@term='accessibility']/..",
namespaces=NSMAP)
entries
###Output
_____no_output_____
###Markdown
The following query returns a list that contains a string. It selects text content (`text()`) of the title element (`atom:title`) that is a child of the current element (`./`).
###Code
# Pick the first (and only) element from the entries list
entry = entries[0]
# It is an ElementTree object and therefore supports the xpath() method itself
entry.xpath('./atom:title/text()', namespaces=NSMAP)
###Output
_____no_output_____
###Markdown
Generating XMLYou can create XML documents from scratch.
###Code
import xml.etree.ElementTree as etree
atom_NS = '{http://www.w3.org/2005/Atom}'
w3_NS = '{http://www.w3.org/XML/1998/namespace}'
###Output
_____no_output_____
###Markdown
To create a new element, instantiate the `Element` class. You pass the element name (namespace + local name) as the first argument. This statement creates a `feed` element in the Atom namespace. This will be our new document’s root element.To add attributes to the newly created element, pass a dictionary of attribute names and values in the `attrib` argument. Note that the attribute name should be in the standard ElementTree format, `{namespace}localname`.
###Code
new_feed = etree.Element(f'{atom_NS}feed',
attrib={f'{w3_NS}lang': 'en'})
###Output
_____no_output_____
###Markdown
At any time, you can serialize any element (and its children) with the ElementTree `tostring()` function.
###Code
print(etree.tostring(new_feed))
###Output
b'<ns0:feed xmlns:ns0="http://www.w3.org/2005/Atom" xml:lang="en" />'
###Markdown
Default namespacesA default namespace is useful for documents — like Atom feeds — where every element is in the same namespace. The namespace is declared once and each element just needs to be declared with its local name (e.g. `feed`, `title`, `entry`). There is no need to use any prefixes unless you want to declare elements from another namespace. The first snippet has a default, implicit namespace:```xml<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en"/>```The second snippet, which is how `ElementTree` serializes namespaced XML elements, uses an explicit namespace prefix:```xml<ns0:feed xmlns:ns0="http://www.w3.org/2005/Atom" xml:lang="en" />```This is technically accurate, but a bit cumbersome to work with. Both serializations parse to an identical `DOM`. `lxml` does offer fine-grained control over how namespaced elements are serialized. The built-in `ElementTree` does not.
###Code
# We import lxml's etree like this, to make it recognizable
# in the example
import lxml.etree
###Output
_____no_output_____
###Markdown
Define a namespace mapping as a dictionary. Dictionary values are namespaces; dictionary keys are the desired prefix. Using `None` as a prefix effectively declares a default namespace.
###Code
NSMAP = {None: 'http://www.w3.org/2005/Atom'}
###Output
_____no_output_____
###Markdown
Now you can pass the `lxml`-specific `nsmap` argument when you create an element, and `lxml` will respect the namespace prefixes you’ve defined.
###Code
new_feed = lxml.etree.Element('feed', nsmap=NSMAP)
###Output
_____no_output_____
###Markdown
This serialization defines the Atom namespace as the default namespace and declares the feed element without a namespace prefix.
###Code
print(lxml.etree.tounicode(new_feed))
# Aha, .tounicode() would be one way to get a string instead of
# a byte object
print(lxml.etree.tostring(new_feed))
###Output
b'<feed xmlns="http://www.w3.org/2005/Atom"/>'
###Markdown
You can always add attributes to any element with the `set()` method. It takes two arguments: the attribute name in standard ElementTree format, then the attribute value. This method is not `lxml`-specific.
###Code
new_feed.set('{http://www.w3.org/XML/1998/namespace}lang', 'en')
print(lxml.etree.tounicode(new_feed))
###Output
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en"/>
###Markdown
Create child elementsInstantiate the `SubElement` class to create a child element of an existing element. The only required arguments are the parent element (`new_feed` in this case) and the new element’s name. Since this child element will inherit the namespace mapping of its parent, there is no need to redeclare the namespace or prefix here.You can also pass in an attribute dictionary. Keys are attribute names; values are attribute values.
###Code
title = lxml.etree.SubElement(new_feed, 'title',
attrib={'type':'html'})
print(lxml.etree.tounicode(new_feed))
###Output
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en"><title type="html"/></feed>
###Markdown
Set the `.text` property to add the text content to an element.
###Code
title.text = 'dive into …'
print(lxml.etree.tounicode(new_feed))
print(lxml.etree.tounicode(new_feed, pretty_print=True))
###Output
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
<title type="html">dive into &hellip;</title>
</feed>
###Markdown
Parsing broken xml`lxml` is capable of parsing XML documents that are not well-formed. The parser chokes on this document, because the `…` entity is not defined in XML.
###Code
import lxml.etree
tree = lxml.etree.parse('broken-feed.xml')
###Output
_____no_output_____
###Markdown
Instantiate the `lxml.etree.XMLParser` class to create a custom parser. It can take a number of different named arguments. Here we are using the `recover` argument, so that the XML parser will try its best to “recover” from wellformedness errors.
###Code
parser = lxml.etree.XMLParser(recover=True)
###Output
_____no_output_____
###Markdown
This works! The second argument of `parse()` is the custom parser.
###Code
tree = lxml.etree.parse('broken-feed.xml', parser)
###Output
_____no_output_____
###Markdown
The parser keeps a log of the wellformedness errors that it has encountered.
###Code
parser.error_log
tree.findall('{http://www.w3.org/2005/Atom}title')
###Output
_____no_output_____
###Markdown
The parser just dropped the undefined `…` entity. The text content of the title element becomes 'dive into '.
###Code
title = tree.findall('{http://www.w3.org/2005/Atom}title')[0]
title.text
###Output
_____no_output_____
###Markdown
As you can see from the serialization, the … entity wasn't preserved; it was simply dropped.
###Code
print(lxml.etree.tounicode(tree.getroot()))
###Output
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
<title>dive into </title>
</feed>
|
legacy/from_scratch.ipynb | ###Markdown
this warning is OK because kmcuda is robust to NaNs in the input
###Code
repost_normed[1, :].multiply(repost_normed[1, :]).sum()
repost_normed
with open("repost_decorr_sparse.pickle", "wb") as fout:
pickle.dump((repos, repost_normed), fout, protocol=-1)
del repost, repost_normed
with open("cluster.py", "w") as fout:
fout.write("""import pickle, libKMCUDA
with open("repost_decorr_sparse.pickle", "rb") as fin:
_, repost = pickle.load(fin)
repost = repost.todense()
dists = []
for k in range(100, 5100, 100):
_, _, average_distance = libKMCUDA.kmeans_cuda(
repost, k, yinyang_t=0, metric="angular", verbosity=1, seed=777, average_distance=True)
print("Average distance:", average_distance)
dists.append(average_distance)
with open("distances.pickle", "wb") as fout:
pickle.dump(dists, fout, protocol=-1)
""")
!python3 cluster.py
with open("distances.pickle", "rb") as fin:
dists = pickle.load(fin)
rcParams["figure.figsize"] = (9, 6)
plot(arange(100, 5100, 100), dists)
title("K-means average intra-cluster distance (topic space, 320 dims)")
xlabel("K")
ylabel("Distance, radians")
# the best K value
K = 3000
with open("cluster.py", "w") as fout:
fout.write("""import pickle, libKMCUDA
with open("repost_decorr_sparse.pickle", "rb") as fin:
_, repost = pickle.load(fin)
repost = repost.todense()
dists = []
centroids, assignments = libKMCUDA.kmeans_cuda(
repost, 3000, metric="angular", verbosity=2, seed=777, tolerance=0.001)
with open("topic_clusters_320_decorr.pickle", "wb") as fout:
pickle.dump((centroids, assignments), fout, protocol=-1)
""")
!python3 cluster.py
with open("topic_clusters_320_decorr.pickle", "rb") as fin:
centroids, assignments = pickle.load(fin)
from sklearn.manifold import TSNE
# NaN centroids which were suppressed by surroundings
(centroids[:, 0] != centroids[:, 0]).sum()
centroids_fixed = centroids[~isnan(centroids).any(axis=1)]
def angular_distance(x, y):
return arccos(min(x.dot(y), 1))
from sklearn.metrics.pairwise import pairwise_distances
cdists = pairwise_distances(centroids_fixed, centroids_fixed, metric=angular_distance)
model = TSNE(random_state=777, metric="precomputed", n_iter=5000)
embeddings = model.fit_transform(cdists)
rcParams["figure.figsize"] = (9, 9)
scatter(embeddings[:, 0], embeddings[:, 1], alpha=0.5)
sqrt(3000)
54 * 54
with open("cluster.py", "w") as fout:
fout.write("""import pickle, libKMCUDA
with open("repost_decorr_sparse.pickle", "rb") as fin:
_, repost = pickle.load(fin)
repost = repost.todense()
dists = []
centroids, assignments = libKMCUDA.kmeans_cuda(
repost, 2916 + 2, metric="angular", verbosity=2, seed=777, tolerance=0.001)
with open("topic_clusters_320_decorr.pickle", "wb") as fout:
pickle.dump((centroids, assignments), fout, protocol=-1)
""")
%time !python3 cluster.py
with open("topic_clusters_320_decorr.pickle", "rb") as fin:
centroids, assignments = pickle.load(fin)
(centroids[:, 0] != centroids[:, 0]).sum()
centroids_fixed = centroids[~isnan(centroids).any(axis=1)]
centroids_fixed.shape
cdists = pairwise_distances(centroids_fixed, centroids_fixed, metric=angular_distance)
model = TSNE(random_state=777, metric="precomputed", n_iter=5000)
embeddings = model.fit_transform(cdists)
scatter(embeddings[:, 0], embeddings[:, 1], alpha=0.5)
import lapjv
from scipy.spatial.distance import cdist
grid = dstack(meshgrid(linspace(0, 1, 54), linspace(0, 1, 54))).reshape(-1, 2)
scatter(grid[:,0], grid[:,1])
embeddings -= embeddings.min(axis=0)
embeddings /= embeddings.max(axis=0)
cost_matrix_topics = cdist(grid, embeddings, "sqeuclidean").astype(float32)
cost_matrix_topics = cost_matrix_topics * (100000 / cost_matrix_topics.max())
cmt_sorted = cost_matrix_topics.flatten()
cmt_sorted.sort()
cmt_diff = diff(cmt_sorted)
cmt_diff.min(), cmt_diff.max()
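# Solve the linear assignment problem (Jonker-Volgenant): assign each t-SNE embedding to a unique grid cell, minimizing the total squared distance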
%time row_assigns_topics, col_assigns_topics, _ = lapjv.lapjv(cost_matrix_topics)
grid_jv = grid[col_assigns_topics]
for start, end in zip(embeddings, grid_jv):
arrow(start[0], start[1], end[0] - start[0], end[1] - start[1],
head_length=0.01, head_width=0.01)
%cd ..
!rm bigartm.*
with open("devs.pickle", "rb") as fin:
devs = pickle.load(fin)
len(devs)
for dev in devs:
if "[email protected]" in dev[0][1]:
maximos_repos = dev[1]
break
maximos_repos
repo_index = {r: i for i, r in enumerate(repos)}
maximos_repos = [(repo_index[r[0]], r[1]) for r in maximos_repos if r[0] in repo_index]
len(maximos_repos)
maximos_clusters = [(assignments[r[0]], r[1]) for r in maximos_repos]
from itertools import groupby
maximos_clusters = [[k, sum(c[1] for c in g)] for k, g in groupby(sorted(maximos_clusters), lambda c: c[0])]
len(maximos_clusters)
where(isnan(centroids).any(axis=1))
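# Shift cluster ids downward to account for the NaN centroids found above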
for c in maximos_clusters:
if c[0] > 1742:
c[0] -= 2
elif c[0] > 1414:
c[0] -= 1
profile = zeros((54, 54), dtype=float32)
for c in maximos_clusters:
profile[tuple((grid_jv[c[0]] * (54 - 1)).astype(int))] = c[1]
imshow(profile, interpolation="nearest", cmap="Blues")
title("Maximo's profile in the topic space")
maximos_clusters_inv = [(p[1], p[0]) for p in maximos_clusters]
maximos_clusters_inv.sort(reverse=True)
maximos_clusters_inv[:10]
for r in maximos_repos:
if assignments[r[0]] == 1336:
print(repos[r[0]])
###Output
skeetr/skeetrd
tyba/beanstool
src-d/fsbench
mcuadros/ofelia
mcuadros/gce-docker
###Markdown
Pure Go
###Code
for r in maximos_repos:
if assignments[r[0]] == 827:
print(repos[r[0]])
###Output
mcuadros/go-jsonschema-generator
mcuadros/go-candyjs
mcuadros/go-etcd-hydrator
mcuadros/go-lookup
###Markdown
Go with Javascript flavor
###Code
for r in maximos_repos:
if assignments[r[0]] == 2074 + 2:
print(repos[r[0]])
###Output
mcuadros/go-rat
mcuadros/go-raa
mcuadros/go-crxmake
src-d/go-git
|
doc/source/tutorials/aggregating_downscaling_consistency.ipynb | ###Markdown
Aggregating and downscaling timeseries dataThe **pyam** package offers many tools to facilitate processing of scenario data.In this notebook, we illustrate methods to aggregate and downscale timeseries data of an `IamDataFrame` across regions and sectors, as well as checking consistency of given data along these dimensions.In this tutorial, we show how to make the most of **pyam** to compute such aggregate timeseries data, and to check that a scenario ensemble (or just a single scenario) is complete and that timeseries data "add up" across regions and along the variable tree (i.e., that the sum of values of the subcategories such as `Primary Energy|*` are identical to the values of the category `Primary Energy`).There are two distinct use cases where these features can be used. Use case 1: compute data at higher/lower sectoral or spatial aggregationGiven scenario results at a specific (usually very detailed) sectoral and spatial resolution, **pyam** offers a suite of functions to easily compute aggregate timeseries. For example, this allows to sum up national energy demand to regional or global values,or to compute the average of a global carbon price weighted by regional emissions.These functions can be used as part of an automated workflow to generate complete scenario results from raw model outputs. Use case 2: check the consistency of data across sectoral or spatial levelsIn model comparison exercises or ensemble compilation projects, a user needs to verify the internal consistency of submitted scenario results (cf. Huppmann et al., 2018, doi: [10.1038/s41558-018-0317-4](http://rdcu.be/9i8a)).Such inconsistencies can be due to incomplete variable hierarchies, reporting templates incompatible with model specifications, or user error. OverviewThis notebook illustrates the following features:0. Load timeseries data from a snapshot file and inspect the scenario1. Aggregate timeseries over sectors (i.e., sub-categories)2. Aggregate timeseries over regions including weighted average3. Downscale timeseries given at a region level to sub-regions using a proxy variable4. Check the internal consistency of a scenario (ensemble)
###Code
import pandas as pd
import pyam
###Output
_____no_output_____
###Markdown
0. Load timeseries data from snapshot file and inspect the scenarioThe stylized scenario used in this tutorial has data for two regions (`reg_a` & `reg_b`) as well as the `World` aggregate, and for categories of variables: primary energy demand, emissions, carbon price, and population.
###Code
df = pyam.IamDataFrame(data='tutorial_data_aggregating_downscaling.csv')
df.regions()
df.variables()
###Output
_____no_output_____
###Markdown
1. Aggregating timeseries across sectorsLet's first display the data for the components of primary energy demand.
###Code
df.filter(variable='Primary Energy|*').timeseries()
###Output
_____no_output_____
###Markdown
Next, we are going to use the [aggregate()](https://pyam-iamc.readthedocs.io/en/stable/api.htmlpyam.IamDataFrame.aggregate) function to compute the total `Primary Energy` from its components (wind and coal) in each region (including `World`).The function returns an `IamDataFrame`, so we can use [timeseries()](https://pyam-iamc.readthedocs.io/en/stable/api.htmlpyam.IamDataFrame.timeseries) to display the resulting data.
###Code
df.aggregate('Primary Energy').timeseries()
###Output
_____no_output_____
###Markdown
If we are interested in **use case 1**, we could use the argument `append=True` to directly add the computed aggregate to the `IamDataFrame`.However, in this tutorial, the data already includes the total primary energy demand. Therefore, we illustrate **use case 2** and apply the [check_aggregate()](https://pyam-iamc.readthedocs.io/en/stable/api.htmlpyam.IamDataFrame.check_aggregate) function to verify whether a given variable is the sum of its sectoral components(i.e., `Primary Energy` should be equal to `Primary Energy|Coal` plus `Primary Energy|Wind`).The validation is performed separately for each region.The function returns `None` if the validation is correct (which it is for primary energy demand)or a [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) highlighting where the aggregate does not match (this will be illustrated in the next section).
###Code
df.check_aggregate('Primary Energy')
###Output
_____no_output_____
###Markdown
The function also returns useful logging messages when there is nothing to check (because there are no sectors below `Primary Energy|Wind`).
###Code
df.check_aggregate('Primary Energy|Wind')
###Output
_____no_output_____
###Markdown
2. Aggregating timeseries across subregionsSimilarly to the previous example, we now use the [aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api.htmlpyam.IamDataFrame.aggregate_region) function to compute regional aggregates.By default, this method sums all the regions in the dataframe to make a `World` region; this can be changed with the keyword arguments `region` and `subregions`.
###Code
df.aggregate_region('Primary Energy').timeseries()
###Output
_____no_output_____
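###Markdown
For instance, the same aggregate can be computed over an explicit set of subregions (a sketch assuming the region names `reg_a` and `reg_b` used in this tutorial's data):
###Code
df.aggregate_region('Primary Energy', region='World', subregions=['reg_a', 'reg_b']).timeseries()
###Output
_____no_output_____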
###Markdown
Adding regional componentsAs a next step, we use [check_aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api.htmlpyam.IamDataFrame.check_aggregate_region) to verify that the regional aggregate of CO2 emissions matches the timeseries data given in the scenario.
###Code
df.check_aggregate_region('Emissions|CO2')
###Output
_____no_output_____
###Markdown
As announced above, this validation failed and we see a dataframe of the expected data at the `region` level and the aggregation computed from the `subregions`.Let's look at the entire emissions timeseries in the scenario to find out what is going on.
###Code
df.filter(variable='Emissions*').timeseries()
###Output
_____no_output_____
###Markdown
Investigating the data carefully, you will notice that emissions from the energy sector and agriculture, forestry & land use (AFOLU) are given in the subregions and the `World` region, whereas emissions from bunker fuels are only defined at the global level.This is a common issue in emissions data, where some sources (e.g., global aviation and maritime transport) cannot be attributed to one region.Luckily, the functions [aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api.htmlpyam.IamDataFrame.aggregate_region)and [check_aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api.htmlpyam.IamDataFrame.check_aggregate_region)support this use case:by adding `components=True`, the regional aggregation will include any sub-categories of the variable that are only present at the `region` level but not in any subregion.
###Code
df.aggregate_region('Emissions|CO2', components=True).timeseries()
###Output
_____no_output_____
###Markdown
The regional aggregate now matches the data given at the `World` level in the tutorial data.Note that the components to be included at the region level can also be specified directly via a list of variables, in this case we would use `components=['Emissions|CO2|Bunkers']`. Computing a weighted average across regionsOne other frequent requirement when aggregating across regions is a weighted average.To illustrate this feature, the tutorial data includes carbon price data.Naturally, the appropriate weighting data are the regional carbon emissions.The following cells show:0. The carbon price data across the regions1. A (failing) validation that the regional aggregation (without weights) matches the reported prices at the `World` level2. The emissions-weighted average of carbon prices returned as a new `IamDataFrame`
###Code
df.filter(variable='Price|Carbon').timeseries()
df.check_aggregate_region('Price|Carbon')
df.aggregate_region('Price|Carbon', weight='Emissions|CO2').timeseries()
###Output
_____no_output_____
###Markdown
3. Downscaling timeseries data to subregionsThe inverse operation of regional aggregation is "downscaling" of timeseries data given at a regional level to a number of subregions, usually using some other data as proxy to divide and allocate the total to the subregions.This section shows an example using the [downscale_region()](https://pyam-iamc.readthedocs.io/en/stable/api.htmlpyam.IamDataFrame.downscale_region) function to divide the total primary energy demand using population as a proxy.
###Code
df.downscale_region('Primary Energy', proxy='Population').timeseries()
###Output
_____no_output_____
###Markdown
By the way, the functions[aggregate()](https://pyam-iamc.readthedocs.io/en/stable/api.htmlpyam.IamDataFrame.aggregate), [aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api.htmlpyam.IamDataFrame.aggregate_region) and[downscale_region()](https://pyam-iamc.readthedocs.io/en/stable/api.htmlpyam.IamDataFrame.downscale_region)also take lists of variables as `variable` argument.See the next cell for an example.
###Code
var_list = ['Primary Energy', 'Primary Energy|Coal']
df.downscale_region(var_list, proxy='Population').timeseries()
###Output
_____no_output_____
###Markdown
4. Checking the internal consistency of a scenario (ensemble)The previous sections illustrated two functions to validate specific variables across their sectors (sub-categories) or regional disaggregation.These two functions are combined in the [check_internal_consistency()](https://pyam-iamc.readthedocs.io/en/stable/api.htmlpyam.IamDataFrame.check_internal_consistency) feature.This feature of the **pyam** package currently only supports "consistency"in the sense of a strictly hierarchical variable tree(with subcategories summing up to the category value including components, discussed above)and that all the regions sum to the ``World`` region. See [this issue](https://github.com/IAMconsortium/pyam/issues/106) for more information.If we have an internally consistent scenario ensemble (or single scenario), the function will return `None`; otherwise, it will return a concatenation of [pandas.DataFrames](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) indicating all detected inconsistencies.For this section, we use a tutorial scenario which is constructed to highlight the individual validation features below.The scenario below has two inconsistencies:1. In year `2010` and regions `region_b` & `World`, the values of coal and wind do not add up to the total `Primary Energy` value2. In year `2020` in the `World` region, the value of `Primary Energy` and `Primary Energy|Coal` is not the sum of `region_a` and `region_b` (but the sum of wind and coal to `Primary Energy` in each sub-region is correct)
###Code
tutorial_df = pyam.IamDataFrame(pd.DataFrame([
['World', 'Primary Energy', 'EJ/yr', 7, 15],
['World', 'Primary Energy|Coal', 'EJ/yr', 4, 11],
['World', 'Primary Energy|Wind', 'EJ/yr', 2, 4],
['region_a', 'Primary Energy', 'EJ/yr', 4, 8],
['region_a', 'Primary Energy|Coal', 'EJ/yr', 2, 6],
['region_a', 'Primary Energy|Wind', 'EJ/yr', 2, 2],
['region_b', 'Primary Energy', 'EJ/yr', 3, 6],
['region_b', 'Primary Energy|Coal', 'EJ/yr', 2, 4],
['region_b', 'Primary Energy|Wind', 'EJ/yr', 0, 2],
],
columns=['region', 'variable', 'unit', 2010, 2020]
), model='model_a', scenario='scen_a')
###Output
_____no_output_____
###Markdown
All checking-functions take arguments for [np.isclose()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.isclose.html) as keyword arguments. We show our recommended settings and how to use them here.
###Code
np_isclose_args = {
'equal_nan': True,
'rtol': 1e-03,
'atol': 1e-05,
}
tutorial_df.check_internal_consistency(**np_isclose_args)
###Output
_____no_output_____
###Markdown
Aggregating and downscaling timeseries dataThe **pyam** package offers many tools to facilitate processing of scenario data.In this notebook, we illustrate methods to aggregate and downscale timeseries data of an `IamDataFrame` across regions and sectors, as well as checking consistency of given data along these dimensions.In this tutorial, we show how to make the most of **pyam** to compute such aggregate timeseries data, and to check that a scenario ensemble (or just a single scenario) is complete and that timeseries data "add up" across regions and along the variable tree (i.e., that the sum of values of the subcategories such as `Primary Energy|*` are identical to the values of the category `Primary Energy`).There are two distinct use cases where these features can be used. Use case 1: compute data at higher/lower sectoral or spatial aggregationGiven scenario results at a specific (usually very detailed) sectoral and spatial resolution, **pyam** offers a suite of functions to easily compute aggregate timeseries. For example, this allows to sum up national energy demand to regional or global values,or to compute the average of a global carbon price weighted by regional emissions.These functions can be used as part of an automated workflow to generate complete scenario results from raw model outputs. Use case 2: check the consistency of data across sectoral or spatial levelsIn model comparison exercises or ensemble compilation projects, a user needs to verify the internal consistency of submitted scenario results (cf. Huppmann et al., 2018, doi: [10.1038/s41558-018-0317-4](http://rdcu.be/9i8a)).Such inconsistencies can be due to incomplete variable hierarchies, reporting templates incompatible with model specifications, or user error. OverviewThis notebook illustrates the following features:0. Load timeseries data from a snapshot file and inspect the scenario1. Aggregate timeseries over sectors (i.e., sub-categories)2. Aggregate timeseries over regions including weighted average3. Downscale timeseries given at a region level to sub-regions using a proxy variable4. Downscale timeseries using an explicit weighting dataframe5. Check the internal consistency of a scenario (ensemble)
###Code
import pandas as pd
import pyam
###Output
_____no_output_____
###Markdown
0. Load timeseries data from snapshot file and inspect the scenarioThe stylized scenario used in this tutorial has data for two regions (`reg_a` & `reg_b`) as well as the `World` aggregate, and for categories of variables: primary energy demand, emissions, carbon price, and population.
###Code
df = pyam.IamDataFrame(data='tutorial_data_aggregating_downscaling.csv')
df.regions()
df.variables()
###Output
_____no_output_____
###Markdown
1. Aggregating timeseries across sectorsLet's first display the data for the components of primary energy demand.
###Code
df.filter(variable='Primary Energy|*').timeseries()
###Output
_____no_output_____
###Markdown
Next, we are going to use the [aggregate()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate) function to compute the total `Primary Energy` from its components (wind and coal) in each region (including `World`).The function returns an `IamDataFrame`, so we can use [timeseries()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.timeseries) to display the resulting data.
###Code
df.aggregate('Primary Energy').timeseries()
###Output
_____no_output_____
###Markdown
If we are interested in **use case 1**, we could use the argument `append=True` to directly add the computed aggregate to the `IamDataFrame`.However, in this tutorial, the data already includes the total primary energy demand. Therefore, we illustrate **use case 2** and apply the [check_aggregate()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.check_aggregate) function to verify whether a given variable is the sum of its sectoral components(i.e., `Primary Energy` should be equal to `Primary Energy|Coal` plus `Primary Energy|Wind`).The validation is performed separately for each region.The function returns `None` if the validation is correct (which it is for primary energy demand)or a [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) highlighting where the aggregate does not match (this will be illustrated in the next section).
###Code
df.check_aggregate('Primary Energy')
###Output
_____no_output_____
###Markdown
The function also returns useful logging messages when there is nothing to check (because there are no sectors below `Primary Energy|Wind`).
###Code
df.check_aggregate('Primary Energy|Wind')
###Output
_____no_output_____
###Markdown
2. Aggregating timeseries across subregionsSimilarly to the previous example, we now use the [aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate_region) function to compute regional aggregates.By default, this method sums all the regions in the dataframe to make a `World` region; this can be changed with the keyword arguments `region` and `subregions`.
###Code
df.aggregate_region('Primary Energy').timeseries()
###Output
_____no_output_____
###Markdown
Adding regional componentsAs a next step, we use [check_aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.check_aggregate_region) to verify that the regional aggregate of CO2 emissions matches the timeseries data given in the scenario.
###Code
df.check_aggregate_region('Emissions|CO2')
###Output
_____no_output_____
###Markdown
As announced above, this validation failed and we see a dataframe of the expected data at the `region` level and the aggregation computed from the `subregions`.Let's look at the entire emissions timeseries in the scenario to find out what is going on.
###Code
df.filter(variable='Emissions*').timeseries()
###Output
_____no_output_____
###Markdown
Investigating the data carefully, you will notice that emissions from the energy sector and agriculture, forestry & land use (AFOLU) are given in the subregions and the `World` region, whereas emissions from bunker fuels are only defined at the global level.This is a common issue in emissions data, where some sources (e.g., global aviation and maritime transport) cannot be attributed to one region.Luckily, the functions [aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate_region)and [check_aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.check_aggregate_region)support this use case:by adding `components=True`, the regional aggregation will include any sub-categories of the variable that are only present at the `region` level but not in any subregion.
###Code
df.aggregate_region('Emissions|CO2', components=True).timeseries()
###Output
_____no_output_____
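###Markdown
Equivalently, the bunker-fuel component can be named explicitly instead of using `components=True` (a sketch using the variable name from this dataset, as also noted below):
###Code
df.aggregate_region('Emissions|CO2', components=['Emissions|CO2|Bunkers']).timeseries()
###Output
_____no_output_____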
###Markdown
The regional aggregate now matches the data given at the `World` level in the tutorial data.Note that the components to be included at the region level can also be specified directly via a list of variables, in this case we would use `components=['Emissions|CO2|Bunkers']`. Computing a weighted average across regionsOne other frequent requirement when aggregating across regions is a weighted average.To illustrate this feature, the tutorial data includes carbon price data.Naturally, the appropriate weighting data are the regional carbon emissions.The following cells show:0. The carbon price data across the regions1. A (failing) validation that the regional aggregation (without weights) matches the reported prices at the `World` level2. The emissions-weighted average of carbon prices returned as a new `IamDataFrame`
###Code
df.filter(variable='Price|Carbon').timeseries()
df.check_aggregate_region('Price|Carbon')
df.aggregate_region('Price|Carbon', weight='Emissions|CO2').timeseries()
###Output
_____no_output_____
###Markdown
3. Downscaling timeseries data to subregions using a proxyThe inverse operation of regional aggregation is "downscaling" of timeseries data given at a regional level to a number of subregions, usually using some other data as proxy to divide and allocate the total to the subregions.This section shows an example using the [downscale_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.downscale_region) function to divide the total primary energy demand using population as a proxy.
###Code
df.filter(variable='Population').timeseries()
df.downscale_region('Primary Energy', proxy='Population').timeseries()
###Output
_____no_output_____
###Markdown
By the way, the functions[aggregate()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate), [aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate_region) and[downscale_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.downscale_region)also take lists of variables as `variable` argument.See the next cell for an example.
###Code
var_list = ['Primary Energy', 'Primary Energy|Coal']
df.downscale_region(var_list, proxy='Population').timeseries()
###Output
_____no_output_____
###Markdown
4. Downscaling timeseries data to subregions using a weighting dataframeIn cases where using existing data directly as a proxy (as illustrated in the previous section) is not practical,a user can also create a weighting dataframe and pass that directly to the `downscale_region()` function.The example below uses the weighting factors implied by the population variable for easy comparison to the previous section.
###Code
weight = pd.DataFrame(
[[0.66, 0.6], [0.33, 0.4]],
index=pd.Series(['reg_a', 'reg_b'], name='region'),
columns=pd.Series([2005, 2010], name='year')
)
weight
df.downscale_region(var_list, weight=weight).timeseries()
###Output
_____no_output_____
###Markdown
5. Checking the internal consistency of a scenario (ensemble)The previous sections illustrated two functions to validate specific variables across their sectors (sub-categories) or regional disaggregation.These two functions are combined in the [check_internal_consistency()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.check_internal_consistency) feature.This feature of the **pyam** package currently only supports "consistency"in the sense of a strictly hierarchical variable tree(with subcategories summing up to the category value including components, discussed above)and that all the regions sum to the ``World`` region. See [this issue](https://github.com/IAMconsortium/pyam/issues/106) for more information.If we have an internally consistent scenario ensemble (or single scenario), the function will return `None`; otherwise, it will return a concatenation of [pandas.DataFrames](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) indicating all detected inconsistencies.For this section, we use a tutorial scenario which is constructed to highlight the individual validation features below.The scenario below has two inconsistencies:1. In year `2010` and regions `region_b` & `World`, the values of coal and wind do not add up to the total `Primary Energy` value2. In year `2020` in the `World` region, the value of `Primary Energy` and `Primary Energy|Coal` is not the sum of `region_a` and `region_b` (but the sum of wind and coal to `Primary Energy` in each sub-region is correct)
###Code
tutorial_df = pyam.IamDataFrame(pd.DataFrame([
['World', 'Primary Energy', 'EJ/yr', 7, 15],
['World', 'Primary Energy|Coal', 'EJ/yr', 4, 11],
['World', 'Primary Energy|Wind', 'EJ/yr', 2, 4],
['region_a', 'Primary Energy', 'EJ/yr', 4, 8],
['region_a', 'Primary Energy|Coal', 'EJ/yr', 2, 6],
['region_a', 'Primary Energy|Wind', 'EJ/yr', 2, 2],
['region_b', 'Primary Energy', 'EJ/yr', 3, 6],
['region_b', 'Primary Energy|Coal', 'EJ/yr', 2, 4],
['region_b', 'Primary Energy|Wind', 'EJ/yr', 0, 2],
],
columns=['region', 'variable', 'unit', 2010, 2020]
), model='model_a', scenario='scen_a')
###Output
_____no_output_____
###Markdown
All checking functions accept keyword arguments that are passed on to [numpy.isclose()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.isclose.html). We show our recommended settings and how to use them here.
###Code
np_isclose_args = {
'equal_nan': True,
'rtol': 1e-03,
'atol': 1e-05,
}
tutorial_df.check_internal_consistency(**np_isclose_args)
###Output
_____no_output_____
###Markdown
Aggregating and downscaling timeseries data: The **pyam** package offers many tools to facilitate processing of scenario data. In this notebook, we illustrate methods to aggregate and downscale timeseries data of an **IamDataFrame** across regions and sectors, as well as checking the consistency of given data along these dimensions. In this tutorial, we show how to make the most of **pyam** to compute such aggregate timeseries data, and to check that a scenario ensemble (or just a single scenario) is complete and that timeseries data "add up" across regions and along the variable tree (i.e., that the sum of values of subcategories such as `Primary Energy|*` is identical to the value of the category `Primary Energy`). There are two distinct use cases where these features can be used. Use case 1: compute data at higher/lower sectoral or spatial aggregation. Given scenario results at a specific (usually very detailed) sectoral and spatial resolution, **pyam** offers a suite of functions to easily compute aggregate timeseries. For example, this allows you to sum up national energy demand to regional or global values, or to compute the average of a global carbon price weighted by regional emissions. These functions can be used as part of an automated workflow to generate complete scenario results from raw model outputs. Use case 2: check the consistency of data across sectoral or spatial levels. In model comparison exercises or ensemble compilation projects, a user needs to verify the internal consistency of submitted scenario results (cf. Huppmann et al., 2018, doi: [10.1038/s41558-018-0317-4](http://rdcu.be/9i8a)). Such inconsistencies can be due to incomplete variable hierarchies, reporting templates incompatible with model specifications, or user error. Overview: This notebook illustrates the following features: 0. Import data from file and inspect the scenario; 1. Aggregate timeseries over sectors (i.e., sub-categories); 2. Aggregate timeseries over regions, including a weighted average; 3. Downscale timeseries given at a region level to sub-regions using a proxy variable; 4. Downscale timeseries using an explicit weighting dataframe; 5. Check the internal consistency of a scenario (ensemble). **See Also**: The **pyam** package also supports algebraic operations (addition, subtraction, multiplication, division) on the timeseries data along any axis or dimension. See the [algebraic operations tutorial notebook](https://pyam-iamc.readthedocs.io/en/stable/tutorials/algebraic_operations.html) for more information.
###Code
import pandas as pd
import pyam
###Output
_____no_output_____
###Markdown
0. Import data from file and inspect the scenarioThe stylized scenario used in this tutorial has data for two regions (`reg_a` & `reg_b`) as well as the `World` aggregate, and for categories of variables: primary energy demand, emissions, carbon price, and population.
###Code
df = pyam.IamDataFrame(data='tutorial_data_aggregating_downscaling.csv')
df.region
df.variable
###Output
_____no_output_____
###Markdown
1. Aggregating timeseries across sectorsLet's first display the data for the components of primary energy demand.
###Code
df.filter(variable='Primary Energy|*').timeseries()
###Output
_____no_output_____
###Markdown
Next, we are going to use the [aggregate()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate) function to compute the total `Primary Energy` from its components (wind and coal) in each region (including `World`).The function returns an **IamDataFrame**, so we can use [timeseries()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.timeseries) to display the resulting data.
###Code
df.aggregate('Primary Energy').timeseries()
###Output
_____no_output_____
###Markdown
If we are interested in **use case 1**, we could use the argument `append=True` to directly add the computed aggregate to the **IamDataFrame** instance.However, in this tutorial, the data already includes the total primary energy demand. Therefore, we illustrate **use case 2** and apply the [check_aggregate()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.check_aggregate) function to verify whether a given variable is the sum of its sectoral components(i.e., `Primary Energy` should be equal to `Primary Energy|Coal` plus `Primary Energy|Wind`).The validation is performed separately for each region.The function returns `None` if the validation is correct (which it is for primary energy demand)or a [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) highlighting where the aggregate does not match (this will be illustrated in the next section).
###Code
df.check_aggregate('Primary Energy')
###Output
_____no_output_____
###Markdown
The function also returns useful logging messages when there is nothing to check (because there are no sectors below `Primary Energy|Wind`).
###Code
df.check_aggregate('Primary Energy|Wind')
###Output
_____no_output_____
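###Markdown
Before moving on, here is a minimal added sketch of **use case 1** mentioned above: on a filtered copy (using `keep=False` to drop the reported total), we recompute `Primary Energy` from its components and append it with `append=True`.
###Code
_df = df.filter(variable='Primary Energy', keep=False)
_df.aggregate('Primary Energy', append=True)
_df.filter(variable='Primary Energy').timeseries()
###Output
_____no_output_____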
###Markdown
2. Aggregating timeseries across subregionsSimilarly to the previous example, we now use the [aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate_region) function to compute regional aggregates.By default, this method sums all the regions in the dataframe to make a `World` region; this can be changed with the keyword arguments `region` and `subregions`.
###Code
df.aggregate_region('Primary Energy').timeseries()
###Output
_____no_output_____
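###Markdown
As an added sketch, the same aggregation can be requested with the `region` and `subregions` keyword arguments spelled out explicitly; for this tutorial data, this is equivalent to the default call above.
###Code
df.aggregate_region('Primary Energy', region='World', subregions=['reg_a', 'reg_b']).timeseries()
###Output
_____no_output_____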
###Markdown
Adding regional componentsAs a next step, we use [check_aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.check_aggregate_region) to verify that the regional aggregate of CO2 emissions matches the timeseries data given in the scenario.
###Code
df.check_aggregate_region('Emissions|CO2')
###Output
_____no_output_____
###Markdown
As announced above, this validation failed and we see a dataframe of the expected data at the `region` level and the aggregation computed from the `subregions`.Let's look at the entire emissions timeseries in the scenario to find out what is going on.
###Code
df.filter(variable='Emissions*').timeseries()
###Output
_____no_output_____
###Markdown
Investigating the data carefully, you will notice that emissions from the energy sector and agriculture, forestry & land use (AFOLU) are given in the subregions and the `World` region, whereas emissions from bunker fuels are only defined at the global level.This is a common issue in emissions data, where some sources (e.g., global aviation and maritime transport) cannot be attributed to one region.Luckily, the functions [aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate_region)and [check_aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.check_aggregate_region)support this use case:by adding `components=True`, the regional aggregation will include any sub-categories of the variable that are only present at the `region` level but not in any subregion.
###Code
df.aggregate_region('Emissions|CO2', components=True).timeseries()
###Output
_____no_output_____
###Markdown
The regional aggregate now matches the data given at the `World` level in the tutorial data. Note that the components to be included at the region level can also be specified directly via a list of variables; in this case, we would use `components=['Emissions|CO2|Bunkers']`. Computing a weighted average across regions: One other frequent requirement when aggregating across regions is a weighted average. To illustrate this feature, the tutorial data includes carbon price data. Naturally, the appropriate weighting data are the regional carbon emissions. The following cells show (0) the carbon price data across the regions, (1) a (failing) validation that the regional aggregation (without weights) matches the reported prices at the `World` level, and (2) the emissions-weighted average of carbon prices returned as a new **IamDataFrame**.
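For reference (an added note), the emissions-weighted average computed below corresponds to $p_{World} = \frac{\sum_r p_r \, E_r}{\sum_r E_r}$, where $p_r$ is the carbon price and $E_r$ the CO2 emissions of region $r$.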
###Code
df.filter(variable='Price|Carbon').timeseries()
df.check_aggregate_region('Price|Carbon')
df.aggregate_region('Price|Carbon', weight='Emissions|CO2').timeseries()
###Output
_____no_output_____
###Markdown
3. Downscaling timeseries data to subregions using a proxyThe inverse operation of regional aggregation is "downscaling" of timeseries data given at a regional level to a number of subregions, usually using some other data as proxy to divide and allocate the total to the subregions.This section shows an example using the [downscale_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.downscale_region) function to divide the total primary energy demand using population as a proxy.
###Code
df.filter(variable='Population').timeseries()
df.downscale_region('Primary Energy', proxy='Population').timeseries()
###Output
_____no_output_____
###Markdown
By the way, the functions[aggregate()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate), [aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate_region) and[downscale_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.downscale_region)also take lists of variables as `variable` argument.See the next cell for an example.
###Code
var_list = ['Primary Energy', 'Primary Energy|Coal']
df.downscale_region(var_list, proxy='Population').timeseries()
###Output
_____no_output_____
###Markdown
4. Downscaling timeseries data to subregions using a weighting dataframeIn cases where using existing data directly as a proxy (as illustrated in the previous section) is not practical,a user can also create a weighting dataframe and pass that directly to the `downscale_region()` function.The example below uses the weighting factors implied by the population variable for easy comparison to the previous section.
###Code
weight = pd.DataFrame(
[[0.66, 0.6], [0.33, 0.4]],
index=pd.Series(['reg_a', 'reg_b'], name='region'),
columns=pd.Series([2005, 2010], name='year')
)
weight
df.downscale_region(var_list, weight=weight).timeseries()
###Output
_____no_output_____
###Markdown
5. Checking the internal consistency of a scenario (ensemble)The previous sections illustrated two functions to validate specific variables across their sectors (sub-categories) or regional disaggregation.These two functions are combined in the [check_internal_consistency()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.check_internal_consistency) feature.This feature of the **pyam** package currently only supports "consistency"in the sense of a strictly hierarchical variable tree(with subcategories summing up to the category value including components, discussed above)and that all the regions sum to the 'World' region. See [this issue](https://github.com/IAMconsortium/pyam/issues/106) for more information.If we have an internally consistent scenario ensemble (or single scenario), the function will return `None`; otherwise, it will return a concatenation of [pandas.DataFrames](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) indicating all detected inconsistencies.For this section, we use a tutorial scenario which is constructed to highlight the individual validation features below.The scenario below has two inconsistencies:1. In year `2010` and regions `region_b` & `World`, the values of coal and wind do not add up to the total `Primary Energy` value2. In year `2020` in the `World` region, the value of `Primary Energy` and `Primary Energy|Coal` is not the sum of `region_a` and `region_b` (but the sum of wind and coal to `Primary Energy` in each sub-region is correct)
###Code
tutorial_df = pyam.IamDataFrame(pd.DataFrame([
['World', 'Primary Energy', 'EJ/yr', 7, 15],
['World', 'Primary Energy|Coal', 'EJ/yr', 4, 11],
['World', 'Primary Energy|Wind', 'EJ/yr', 2, 4],
['region_a', 'Primary Energy', 'EJ/yr', 4, 8],
['region_a', 'Primary Energy|Coal', 'EJ/yr', 2, 6],
['region_a', 'Primary Energy|Wind', 'EJ/yr', 2, 2],
['region_b', 'Primary Energy', 'EJ/yr', 3, 6],
['region_b', 'Primary Energy|Coal', 'EJ/yr', 2, 4],
['region_b', 'Primary Energy|Wind', 'EJ/yr', 0, 2],
],
columns=['region', 'variable', 'unit', 2010, 2020]
), model='model_a', scenario='scen_a')
###Output
_____no_output_____
###Markdown
All checking functions accept keyword arguments that are passed on to [numpy.isclose()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.isclose.html). We show our recommended settings and how to use them here.
###Code
np_isclose_args = {
'equal_nan': True,
'rtol': 1e-03,
'atol': 1e-05,
}
tutorial_df.check_internal_consistency(**np_isclose_args)
###Output
_____no_output_____
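###Markdown
As an added follow-up sketch, the two more specific checks introduced earlier can be used to locate each inconsistency separately: `check_aggregate()` flags the sectoral mismatch (inconsistency 1), while `check_aggregate_region()` flags the regional mismatch (inconsistency 2).
###Code
tutorial_df.check_aggregate('Primary Energy')
tutorial_df.check_aggregate_region('Primary Energy')
###Output
_____no_output_____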
###Markdown
Aggregating and downscaling timeseries data: The **pyam** package offers many tools to facilitate processing of scenario data. In this notebook, we illustrate methods to aggregate and downscale timeseries data of an **IamDataFrame** across regions and sectors, as well as checking the consistency of given data along these dimensions. In this tutorial, we show how to make the most of **pyam** to compute such aggregate timeseries data, and to check that a scenario ensemble (or just a single scenario) is complete and that timeseries data "add up" across regions and along the variable tree (i.e., that the sum of values of subcategories such as `Primary Energy|*` is identical to the value of the category `Primary Energy`). There are two distinct use cases where these features can be used. Use case 1: compute data at higher/lower sectoral or spatial aggregation. Given scenario results at a specific (usually very detailed) sectoral and spatial resolution, **pyam** offers a suite of functions to easily compute aggregate timeseries. For example, this allows you to sum up national energy demand to regional or global values, or to compute the average of a global carbon price weighted by regional emissions. These functions can be used as part of an automated workflow to generate complete scenario results from raw model outputs. Use case 2: check the consistency of data across sectoral or spatial levels. In model comparison exercises or ensemble compilation projects, a user needs to verify the internal consistency of submitted scenario results (cf. Huppmann et al., 2018, doi: [10.1038/s41558-018-0317-4](http://rdcu.be/9i8a)). Such inconsistencies can be due to incomplete variable hierarchies, reporting templates incompatible with model specifications, or user error. Overview: This notebook illustrates the following features: 0. Load timeseries data from a snapshot file and inspect the scenario; 1. Aggregate timeseries over sectors (i.e., sub-categories); 2. Aggregate timeseries over regions, including a weighted average; 3. Downscale timeseries given at a region level to sub-regions using a proxy variable; 4. Downscale timeseries using an explicit weighting dataframe; 5. Check the internal consistency of a scenario (ensemble).
###Code
import pandas as pd
import pyam
###Output
_____no_output_____
###Markdown
0. Load timeseries data from snapshot file and inspect the scenarioThe stylized scenario used in this tutorial has data for two regions (`reg_a` & `reg_b`) as well as the `World` aggregate, and for categories of variables: primary energy demand, emissions, carbon price, and population.
###Code
df = pyam.IamDataFrame(data='tutorial_data_aggregating_downscaling.csv')
df.region
df.variable
###Output
_____no_output_____
###Markdown
1. Aggregating timeseries across sectorsLet's first display the data for the components of primary energy demand.
###Code
df.filter(variable='Primary Energy|*').timeseries()
###Output
_____no_output_____
###Markdown
Next, we are going to use the [aggregate()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate) function to compute the total `Primary Energy` from its components (wind and coal) in each region (including `World`).The function returns an **IamDataFrame**, so we can use [timeseries()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.timeseries) to display the resulting data.
###Code
df.aggregate('Primary Energy').timeseries()
###Output
_____no_output_____
###Markdown
If we are interested in **use case 1**, we could use the argument `append=True` to directly add the computed aggregate to the **IamDataFrame** instance.However, in this tutorial, the data already includes the total primary energy demand. Therefore, we illustrate **use case 2** and apply the [check_aggregate()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.check_aggregate) function to verify whether a given variable is the sum of its sectoral components(i.e., `Primary Energy` should be equal to `Primary Energy|Coal` plus `Primary Energy|Wind`).The validation is performed separately for each region.The function returns `None` if the validation is correct (which it is for primary energy demand)or a [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) highlighting where the aggregate does not match (this will be illustrated in the next section).
###Code
df.check_aggregate('Primary Energy')
###Output
_____no_output_____
###Markdown
The function also returns useful logging messages when there is nothing to check (because there are no sectors below `Primary Energy|Wind`).
###Code
df.check_aggregate('Primary Energy|Wind')
###Output
_____no_output_____
###Markdown
2. Aggregating timeseries across subregionsSimilarly to the previous example, we now use the [aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate_region) function to compute regional aggregates.By default, this method sums all the regions in the dataframe to make a `World` region; this can be changed with the keyword arguments `region` and `subregions`.
###Code
df.aggregate_region('Primary Energy').timeseries()
###Output
_____no_output_____
###Markdown
Adding regional componentsAs a next step, we use [check_aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.check_aggregate_region) to verify that the regional aggregate of CO2 emissions matches the timeseries data given in the scenario.
###Code
df.check_aggregate_region('Emissions|CO2')
###Output
_____no_output_____
###Markdown
As announced above, this validation failed and we see a dataframe of the expected data at the `region` level and the aggregation computed from the `subregions`.Let's look at the entire emissions timeseries in the scenario to find out what is going on.
###Code
df.filter(variable='Emissions*').timeseries()
###Output
_____no_output_____
###Markdown
Investigating the data carefully, you will notice that emissions from the energy sector and agriculture, forestry & land use (AFOLU) are given in the subregions and the `World` region, whereas emissions from bunker fuels are only defined at the global level.This is a common issue in emissions data, where some sources (e.g., global aviation and maritime transport) cannot be attributed to one region.Luckily, the functions [aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate_region)and [check_aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.check_aggregate_region)support this use case:by adding `components=True`, the regional aggregation will include any sub-categories of the variable that are only present at the `region` level but not in any subregion.
###Code
df.aggregate_region('Emissions|CO2', components=True).timeseries()
###Output
_____no_output_____
###Markdown
The regional aggregate now matches the data given at the `World` level in the tutorial data. Note that the components to be included at the region level can also be specified directly via a list of variables; in this case, we would use `components=['Emissions|CO2|Bunkers']`. Computing a weighted average across regions: One other frequent requirement when aggregating across regions is a weighted average. To illustrate this feature, the tutorial data includes carbon price data. Naturally, the appropriate weighting data are the regional carbon emissions. The following cells show (0) the carbon price data across the regions, (1) a (failing) validation that the regional aggregation (without weights) matches the reported prices at the `World` level, and (2) the emissions-weighted average of carbon prices returned as a new **IamDataFrame**.
###Code
df.filter(variable='Price|Carbon').timeseries()
df.check_aggregate_region('Price|Carbon')
df.aggregate_region('Price|Carbon', weight='Emissions|CO2').timeseries()
###Output
_____no_output_____
###Markdown
3. Downscaling timeseries data to subregions using a proxyThe inverse operation of regional aggregation is "downscaling" of timeseries data given at a regional level to a number of subregions, usually using some other data as proxy to divide and allocate the total to the subregions.This section shows an example using the [downscale_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.downscale_region) function to divide the total primary energy demand using population as a proxy.
###Code
df.filter(variable='Population').timeseries()
df.downscale_region('Primary Energy', proxy='Population').timeseries()
###Output
_____no_output_____
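###Markdown
As an added cross-check (not part of the original tutorial), aggregating the downscaled values back across the subregions should recover the `Primary Energy` total reported at the `World` level.
###Code
pe_downscaled = df.downscale_region('Primary Energy', proxy='Population')
pe_downscaled.aggregate_region('Primary Energy').timeseries()
###Output
_____no_output_____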
###Markdown
By the way, the functions[aggregate()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate), [aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate_region) and[downscale_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.downscale_region)also take lists of variables as `variable` argument.See the next cell for an example.
###Code
var_list = ['Primary Energy', 'Primary Energy|Coal']
df.downscale_region(var_list, proxy='Population').timeseries()
###Output
_____no_output_____
###Markdown
4. Downscaling timeseries data to subregions using a weighting dataframeIn cases where using existing data directly as a proxy (as illustrated in the previous section) is not practical,a user can also create a weighting dataframe and pass that directly to the `downscale_region()` function.The example below uses the weighting factors implied by the population variable for easy comparison to the previous section.
###Code
weight = pd.DataFrame(
[[0.66, 0.6], [0.33, 0.4]],
index=pd.Series(['reg_a', 'reg_b'], name='region'),
columns=pd.Series([2005, 2010], name='year')
)
weight
df.downscale_region(var_list, weight=weight).timeseries()
###Output
_____no_output_____
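###Markdown
As an added sketch (valid for the single-model, single-scenario structure of this tutorial data), the weighting factors used above can be derived from the population timeseries by normalizing each region by the column total.
###Code
population = df.filter(variable='Population').timeseries()
population / population.sum()
###Output
_____no_output_____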
###Markdown
5. Checking the internal consistency of a scenario (ensemble)The previous sections illustrated two functions to validate specific variables across their sectors (sub-categories) or regional disaggregation.These two functions are combined in the [check_internal_consistency()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.check_internal_consistency) feature.This feature of the **pyam** package currently only supports "consistency"in the sense of a strictly hierarchical variable tree(with subcategories summing up to the category value including components, discussed above)and that all the regions sum to the 'World' region. See [this issue](https://github.com/IAMconsortium/pyam/issues/106) for more information.If we have an internally consistent scenario ensemble (or single scenario), the function will return `None`; otherwise, it will return a concatenation of [pandas.DataFrames](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) indicating all detected inconsistencies.For this section, we use a tutorial scenario which is constructed to highlight the individual validation features below.The scenario below has two inconsistencies:1. In year `2010` and regions `region_b` & `World`, the values of coal and wind do not add up to the total `Primary Energy` value2. In year `2020` in the `World` region, the value of `Primary Energy` and `Primary Energy|Coal` is not the sum of `region_a` and `region_b` (but the sum of wind and coal to `Primary Energy` in each sub-region is correct)
###Code
tutorial_df = pyam.IamDataFrame(pd.DataFrame([
['World', 'Primary Energy', 'EJ/yr', 7, 15],
['World', 'Primary Energy|Coal', 'EJ/yr', 4, 11],
['World', 'Primary Energy|Wind', 'EJ/yr', 2, 4],
['region_a', 'Primary Energy', 'EJ/yr', 4, 8],
['region_a', 'Primary Energy|Coal', 'EJ/yr', 2, 6],
['region_a', 'Primary Energy|Wind', 'EJ/yr', 2, 2],
['region_b', 'Primary Energy', 'EJ/yr', 3, 6],
['region_b', 'Primary Energy|Coal', 'EJ/yr', 2, 4],
['region_b', 'Primary Energy|Wind', 'EJ/yr', 0, 2],
],
columns=['region', 'variable', 'unit', 2010, 2020]
), model='model_a', scenario='scen_a')
###Output
_____no_output_____
###Markdown
All checking functions accept keyword arguments that are passed on to [numpy.isclose()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.isclose.html). We show our recommended settings and how to use them here.
###Code
np_isclose_args = {
'equal_nan': True,
'rtol': 1e-03,
'atol': 1e-05,
}
tutorial_df.check_internal_consistency(**np_isclose_args)
###Output
_____no_output_____
###Markdown
Aggregating and downscaling timeseries data: The **pyam** package offers many tools to facilitate processing of scenario data. In this notebook, we illustrate methods to aggregate and downscale timeseries data of an `IamDataFrame` across regions and sectors, as well as checking the consistency of given data along these dimensions. In this tutorial, we show how to make the most of **pyam** to compute such aggregate timeseries data, and to check that a scenario ensemble (or just a single scenario) is complete and that timeseries data "add up" across regions and along the variable tree (i.e., that the sum of values of subcategories such as `Primary Energy|*` is identical to the value of the category `Primary Energy`). There are two distinct use cases where these features can be used. Use case 1: compute data at higher/lower sectoral or spatial aggregation. Given scenario results at a specific (usually very detailed) sectoral and spatial resolution, **pyam** offers a suite of functions to easily compute aggregate timeseries. For example, this allows you to sum up national energy demand to regional or global values, or to compute the average of a global carbon price weighted by regional emissions. These functions can be used as part of an automated workflow to generate complete scenario results from raw model outputs. Use case 2: check the consistency of data across sectoral or spatial levels. In model comparison exercises or ensemble compilation projects, a user needs to verify the internal consistency of submitted scenario results (cf. Huppmann et al., 2018, doi: [10.1038/s41558-018-0317-4](http://rdcu.be/9i8a)). Such inconsistencies can be due to incomplete variable hierarchies, reporting templates incompatible with model specifications, or user error. Overview: This notebook illustrates the following features: 0. Load timeseries data from a snapshot file and inspect the scenario; 1. Aggregate timeseries over sectors (i.e., sub-categories); 2. Aggregate timeseries over regions, including a weighted average; 3. Downscale timeseries given at a region level to sub-regions using a proxy variable; 4. Check the internal consistency of a scenario (ensemble).
###Code
import pandas as pd
import pyam
###Output
_____no_output_____
###Markdown
0. Load timeseries data from snapshot file and inspect the scenarioThe stylized scenario used in this tutorial has data for two regions (`reg_a` & `reg_b`) as well as the `World` aggregate, and for categories of variables: primary energy demand, emissions, carbon price, and population.
###Code
df = pyam.IamDataFrame(data='tutorial_data_aggregating_downscaling.csv')
df.regions()
df.variables()
###Output
_____no_output_____
###Markdown
1. Aggregating timeseries across sectorsLet's first display the data for the components of primary energy demand.
###Code
df.filter(variable='Primary Energy|*').timeseries()
###Output
_____no_output_____
###Markdown
Next, we are going to use the [aggregate()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate) function to compute the total `Primary Energy` from its components (wind and coal) in each region (including `World`).The function returns an `IamDataFrame`, so we can use [timeseries()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.timeseries) to display the resulting data.
###Code
df.aggregate('Primary Energy').timeseries()
###Output
_____no_output_____
###Markdown
If we are interested in **use case 1**, we could use the argument `append=True` to directly add the computed aggregate to the `IamDataFrame`.However, in this tutorial, the data already includes the total primary energy demand. Therefore, we illustrate **use case 2** and apply the [check_aggregate()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.check_aggregate) function to verify whether a given variable is the sum of its sectoral components(i.e., `Primary Energy` should be equal to `Primary Energy|Coal` plus `Primary Energy|Wind`).The validation is performed separately for each region.The function returns `None` if the validation is correct (which it is for primary energy demand)or a [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) highlighting where the aggregate does not match (this will be illustrated in the next section).
###Code
df.check_aggregate('Primary Energy')
###Output
_____no_output_____
###Markdown
The function also returns useful logging messages when there is nothing to check (because there are no sectors below `Primary Energy|Wind`).
###Code
df.check_aggregate('Primary Energy|Wind')
###Output
_____no_output_____
###Markdown
2. Aggregating timeseries across subregionsSimilarly to the previous example, we now use the [aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate_region) function to compute regional aggregates.By default, this method sums all the regions in the dataframe to make a `World` region; this can be changed with the keyword arguments `region` and `subregions`.
###Code
df.aggregate_region('Primary Energy').timeseries()
###Output
_____no_output_____
###Markdown
Adding regional componentsAs a next step, we use [check_aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.check_aggregate_region) to verify that the regional aggregate of CO2 emissions matches the timeseries data given in the scenario.
###Code
df.check_aggregate_region('Emissions|CO2')
###Output
_____no_output_____
###Markdown
As announced above, this validation failed and we see a dataframe of the expected data at the `region` level and the aggregation computed from the `subregions`.Let's look at the entire emissions timeseries in the scenario to find out what is going on.
###Code
df.filter(variable='Emissions*').timeseries()
###Output
_____no_output_____
###Markdown
Investigating the data carefully, you will notice that emissions from the energy sector and agriculture, forestry & land use (AFOLU) are given in the subregions and the `World` region, whereas emissions from bunker fuels are only defined at the global level.This is a common issue in emissions data, where some sources (e.g., global aviation and maritime transport) cannot be attributed to one region.Luckily, the functions [aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate_region)and [check_aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.check_aggregate_region)support this use case:by adding `components=True`, the regional aggregation will include any sub-categories of the variable that are only present at the `region` level but not in any subregion.
###Code
df.aggregate_region('Emissions|CO2', components=True).timeseries()
###Output
_____no_output_____
###Markdown
The regional aggregate now matches the data given at the `World` level in the tutorial data. Note that the components to be included at the region level can also be specified directly via a list of variables; in this case, we would use `components=['Emissions|CO2|Bunkers']`. Computing a weighted average across regions: One other frequent requirement when aggregating across regions is a weighted average. To illustrate this feature, the tutorial data includes carbon price data. Naturally, the appropriate weighting data are the regional carbon emissions. The following cells show (0) the carbon price data across the regions, (1) a (failing) validation that the regional aggregation (without weights) matches the reported prices at the `World` level, and (2) the emissions-weighted average of carbon prices returned as a new `IamDataFrame`.
###Code
df.filter(variable='Price|Carbon').timeseries()
df.check_aggregate_region('Price|Carbon')
df.aggregate_region('Price|Carbon', weight='Emissions|CO2').timeseries()
###Output
_____no_output_____
###Markdown
3. Downscaling timeseries data to subregionsThe inverse operation of regional aggregation is "downscaling" of timeseries data given at a regional level to a number of subregions, usually using some other data as proxy to divide and allocate the total to the subregions.This section shows an example using the [downscale_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.downscale_region) function to divide the total primary energy demand using population as a proxy.
###Code
df.downscale_region('Primary Energy', proxy='Population').timeseries()
###Output
_____no_output_____
###Markdown
By the way, the functions[aggregate()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate), [aggregate_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.aggregate_region) and[downscale_region()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.downscale_region)also take lists of variables as `variable` argument.See the next cell for an example.
###Code
var_list = ['Primary Energy', 'Primary Energy|Coal']
df.downscale_region(var_list, proxy='Population').timeseries()
###Output
_____no_output_____
###Markdown
4. Checking the internal consistency of a scenario (ensemble)The previous sections illustrated two functions to validate specific variables across their sectors (sub-categories) or regional disaggregation.These two functions are combined in the [check_internal_consistency()](https://pyam-iamc.readthedocs.io/en/stable/api/iamdataframe.htmlpyam.IamDataFrame.check_internal_consistency) feature.This feature of the **pyam** package currently only supports "consistency"in the sense of a strictly hierarchical variable tree(with subcategories summing up to the category value including components, discussed above)and that all the regions sum to the ``World`` region. See [this issue](https://github.com/IAMconsortium/pyam/issues/106) for more information.If we have an internally consistent scenario ensemble (or single scenario), the function will return `None`; otherwise, it will return a concatenation of [pandas.DataFrames](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) indicating all detected inconsistencies.For this section, we use a tutorial scenario which is constructed to highlight the individual validation features below.The scenario below has two inconsistencies:1. In year `2010` and regions `region_b` & `World`, the values of coal and wind do not add up to the total `Primary Energy` value2. In year `2020` in the `World` region, the value of `Primary Energy` and `Primary Energy|Coal` is not the sum of `region_a` and `region_b` (but the sum of wind and coal to `Primary Energy` in each sub-region is correct)
###Code
tutorial_df = pyam.IamDataFrame(pd.DataFrame([
['World', 'Primary Energy', 'EJ/yr', 7, 15],
['World', 'Primary Energy|Coal', 'EJ/yr', 4, 11],
['World', 'Primary Energy|Wind', 'EJ/yr', 2, 4],
['region_a', 'Primary Energy', 'EJ/yr', 4, 8],
['region_a', 'Primary Energy|Coal', 'EJ/yr', 2, 6],
['region_a', 'Primary Energy|Wind', 'EJ/yr', 2, 2],
['region_b', 'Primary Energy', 'EJ/yr', 3, 6],
['region_b', 'Primary Energy|Coal', 'EJ/yr', 2, 4],
['region_b', 'Primary Energy|Wind', 'EJ/yr', 0, 2],
],
columns=['region', 'variable', 'unit', 2010, 2020]
), model='model_a', scenario='scen_a')
###Output
_____no_output_____
###Markdown
All checking functions accept keyword arguments that are passed on to [numpy.isclose()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.isclose.html). We show our recommended settings and how to use them here.
###Code
np_isclose_args = {
'equal_nan': True,
'rtol': 1e-03,
'atol': 1e-05,
}
tutorial_df.check_internal_consistency(**np_isclose_args)
###Output
_____no_output_____ |
examples/vision/ipynb/mirnet.ipynb | ###Markdown
Low-light image enhancement using MIRNet **Author:** [Soumik Rakshit](http://github.com/soumik12345) **Date created:** 2021/09/11 **Last modified:** 2021/09/15 **Description:** Implementing the MIRNet architecture for low-light image enhancement. Introduction: With the goal of recovering high-quality image content from its degraded version, image restoration enjoys numerous applications, such as in photography, security, medical imaging, and remote sensing. In this example, we implement the **MIRNet** model for low-light image enhancement, a fully-convolutional architecture that learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details. References: - [Learning Enriched Features for Real Image Restoration and Enhancement](https://arxiv.org/abs/2003.06792) - [The Retinex Theory of Color Vision](http://www.cnbc.cmu.edu/~tai/cp_papers/E.Land_Retinex_Theory_ScientifcAmerican.pdf) - [Two deterministic half-quadratic regularization algorithms for computed imaging](https://ieeexplore.ieee.org/document/413553) Downloading LOLDataset: The **LoL Dataset** has been created for low-light image enhancement. It provides 485 images for training and 15 for testing. Each image pair in the dataset consists of a low-light input image and its corresponding well-exposed reference image.
###Code
import os
import cv2
import random
import numpy as np
from glob import glob
from PIL import Image, ImageOps
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
!gdown https://drive.google.com/uc?id=1DdGIJ4PZPlF2ikl8mNM9V-PdVxVLbQi6
!unzip -q lol_dataset.zip
###Output
_____no_output_____
###Markdown
Creating a TensorFlow Dataset: We use 300 image pairs from the LoL Dataset's training set for training, and we use the remaining 185 image pairs for validation. We generate random crops of size `128 x 128` from the image pairs to be used for both training and validation.
###Code
random.seed(10)
IMAGE_SIZE = 128
BATCH_SIZE = 4
MAX_TRAIN_IMAGES = 300
def read_image(image_path):
image = tf.io.read_file(image_path)
image = tf.image.decode_png(image, channels=3)
image.set_shape([None, None, 3])
image = tf.cast(image, dtype=tf.float32) / 255.0
return image
def random_crop(low_image, enhanced_image):
low_image_shape = tf.shape(low_image)[:2]
low_w = tf.random.uniform(
shape=(), maxval=low_image_shape[1] - IMAGE_SIZE + 1, dtype=tf.int32
)
low_h = tf.random.uniform(
shape=(), maxval=low_image_shape[0] - IMAGE_SIZE + 1, dtype=tf.int32
)
enhanced_w = low_w
enhanced_h = low_h
low_image_cropped = low_image[
low_h : low_h + IMAGE_SIZE, low_w : low_w + IMAGE_SIZE
]
enhanced_image_cropped = enhanced_image[
enhanced_h : enhanced_h + IMAGE_SIZE, enhanced_w : enhanced_w + IMAGE_SIZE
]
return low_image_cropped, enhanced_image_cropped
def load_data(low_light_image_path, enhanced_image_path):
low_light_image = read_image(low_light_image_path)
enhanced_image = read_image(enhanced_image_path)
low_light_image, enhanced_image = random_crop(low_light_image, enhanced_image)
return low_light_image, enhanced_image
def get_dataset(low_light_images, enhanced_images):
dataset = tf.data.Dataset.from_tensor_slices((low_light_images, enhanced_images))
dataset = dataset.map(load_data, num_parallel_calls=tf.data.AUTOTUNE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
return dataset
train_low_light_images = sorted(glob("./lol_dataset/our485/low/*"))[:MAX_TRAIN_IMAGES]
train_enhanced_images = sorted(glob("./lol_dataset/our485/high/*"))[:MAX_TRAIN_IMAGES]
val_low_light_images = sorted(glob("./lol_dataset/our485/low/*"))[MAX_TRAIN_IMAGES:]
val_enhanced_images = sorted(glob("./lol_dataset/our485/high/*"))[MAX_TRAIN_IMAGES:]
test_low_light_images = sorted(glob("./lol_dataset/eval15/low/*"))
test_enhanced_images = sorted(glob("./lol_dataset/eval15/high/*"))
train_dataset = get_dataset(train_low_light_images, train_enhanced_images)
val_dataset = get_dataset(val_low_light_images, val_enhanced_images)
print("Train Dataset:", train_dataset)
print("Val Dataset:", val_dataset)
###Output
_____no_output_____
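###Markdown
As an added sanity check (not part of the original example), we can visualize one low-light/reference pair from a training batch.
###Code
for low, high in train_dataset.take(1):
    _, axes = plt.subplots(1, 2, figsize=(10, 5))
    axes[0].imshow(low[0])
    axes[0].set_title("Low-light input")
    axes[0].axis("off")
    axes[1].imshow(high[0])
    axes[1].set_title("Well-exposed reference")
    axes[1].axis("off")
    plt.show()
###Output
_____no_output_____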
###Markdown
MIRNet Model: Here are the main features of the MIRNet model: - A feature extraction model that computes a complementary set of features across multiple spatial scales, while maintaining the original high-resolution features to preserve precise spatial details. - A regularly repeated mechanism for information exchange, where the features across multi-resolution branches are progressively fused together for improved representation learning. - A new approach to fuse multi-scale features using a selective kernel network that dynamically combines variable receptive fields and faithfully preserves the original feature information at each spatial resolution. - A recursive residual design that progressively breaks down the input signal in order to simplify the overall learning process, and allows the construction of very deep networks. ![](https://raw.githubusercontent.com/soumik12345/MIRNet/master/assets/mirnet_architecture.png) Selective Kernel Feature Fusion: The Selective Kernel Feature Fusion or SKFF module performs dynamic adjustment of receptive fields via two operations: **Fuse** and **Select**. The Fuse operator generates global feature descriptors by combining the information from multi-resolution streams. The Select operator uses these descriptors to recalibrate the feature maps (of different streams) followed by their aggregation. **Fuse**: The SKFF receives inputs from three parallel convolution streams carrying different scales of information. We first combine these multi-scale features using an element-wise sum, on which we apply Global Average Pooling (GAP) across the spatial dimension. Next, we apply a channel-downscaling convolution layer to generate a compact feature representation, which passes through three parallel channel-upscaling convolution layers (one for each resolution stream) and provides us with three feature descriptors. **Select**: This operator applies the softmax function to the feature descriptors to obtain the corresponding activations that are used to adaptively recalibrate multi-scale feature maps. The aggregated features are defined as the sum of the products of the corresponding multi-scale features and feature descriptors. ![](https://i.imgur.com/7U6ixF6.png)
###Code
def selective_kernel_feature_fusion(
multi_scale_feature_1, multi_scale_feature_2, multi_scale_feature_3
):
channels = list(multi_scale_feature_1.shape)[-1]
combined_feature = layers.Add()(
[multi_scale_feature_1, multi_scale_feature_2, multi_scale_feature_3]
)
gap = layers.GlobalAveragePooling2D()(combined_feature)
channel_wise_statistics = tf.reshape(gap, shape=(-1, 1, 1, channels))
compact_feature_representation = layers.Conv2D(
filters=channels // 8, kernel_size=(1, 1), activation="relu"
)(channel_wise_statistics)
feature_descriptor_1 = layers.Conv2D(
channels, kernel_size=(1, 1), activation="softmax"
)(compact_feature_representation)
feature_descriptor_2 = layers.Conv2D(
channels, kernel_size=(1, 1), activation="softmax"
)(compact_feature_representation)
feature_descriptor_3 = layers.Conv2D(
channels, kernel_size=(1, 1), activation="softmax"
)(compact_feature_representation)
feature_1 = multi_scale_feature_1 * feature_descriptor_1
feature_2 = multi_scale_feature_2 * feature_descriptor_2
feature_3 = multi_scale_feature_3 * feature_descriptor_3
aggregated_feature = layers.Add()([feature_1, feature_2, feature_3])
return aggregated_feature
###Output
_____no_output_____
###Markdown
Dual Attention Unit: The Dual Attention Unit or DAU is used to extract features in the convolutional streams. While the SKFF block fuses information across multi-resolution branches, we also need a mechanism to share information within a feature tensor, both along the spatial and the channel dimensions, which is done by the DAU block. The DAU suppresses less useful features and only allows more informative ones to pass further. This feature recalibration is achieved by using **Channel Attention** and **Spatial Attention** mechanisms. The **Channel Attention** branch exploits the inter-channel relationships of the convolutional feature maps by applying squeeze and excitation operations. Given a feature map, the squeeze operation applies Global Average Pooling across spatial dimensions to encode global context, thus yielding a feature descriptor. The excitation operator passes this feature descriptor through two convolutional layers followed by the sigmoid gating and generates activations. Finally, the output of the Channel Attention branch is obtained by rescaling the input feature map with the output activations. The **Spatial Attention** branch is designed to exploit the inter-spatial dependencies of convolutional features. The goal of Spatial Attention is to generate a spatial attention map and use it to recalibrate the incoming features. To generate the spatial attention map, the Spatial Attention branch first independently applies Global Average Pooling and Max Pooling operations on input features along the channel dimensions and concatenates the outputs to form a resultant feature map, which is then passed through a convolution and sigmoid activation to obtain the spatial attention map. This spatial attention map is then used to rescale the input feature map. ![](https://i.imgur.com/Dl0IwQs.png)
###Code
def spatial_attention_block(input_tensor):
    # Channel-wise max- and mean-pooling along the channel axis
    max_pooling = tf.reduce_max(input_tensor, axis=-1)
    max_pooling = tf.expand_dims(max_pooling, axis=-1)
    average_pooling = tf.reduce_mean(input_tensor, axis=-1)
    average_pooling = tf.expand_dims(average_pooling, axis=-1)
    concatenated = layers.Concatenate(axis=-1)([max_pooling, average_pooling])
feature_map = layers.Conv2D(1, kernel_size=(1, 1))(concatenated)
feature_map = tf.nn.sigmoid(feature_map)
return input_tensor * feature_map
def channel_attention_block(input_tensor):
channels = list(input_tensor.shape)[-1]
average_pooling = layers.GlobalAveragePooling2D()(input_tensor)
feature_descriptor = tf.reshape(average_pooling, shape=(-1, 1, 1, channels))
feature_activations = layers.Conv2D(
filters=channels // 8, kernel_size=(1, 1), activation="relu"
)(feature_descriptor)
feature_activations = layers.Conv2D(
filters=channels, kernel_size=(1, 1), activation="sigmoid"
)(feature_activations)
return input_tensor * feature_activations
def dual_attention_unit_block(input_tensor):
channels = list(input_tensor.shape)[-1]
feature_map = layers.Conv2D(
channels, kernel_size=(3, 3), padding="same", activation="relu"
)(input_tensor)
feature_map = layers.Conv2D(channels, kernel_size=(3, 3), padding="same")(
feature_map
)
channel_attention = channel_attention_block(feature_map)
spatial_attention = spatial_attention_block(feature_map)
concatenation = layers.Concatenate(axis=-1)([channel_attention, spatial_attention])
concatenation = layers.Conv2D(channels, kernel_size=(1, 1))(concatenation)
return layers.Add()([input_tensor, concatenation])
###Output
_____no_output_____
###Markdown
Multi-Scale Residual Block: The Multi-Scale Residual Block is capable of generating a spatially-precise output by maintaining high-resolution representations, while receiving rich contextual information from low resolutions. The MRB consists of multiple (three in this paper) fully-convolutional streams connected in parallel. It allows information exchange across parallel streams in order to consolidate the high-resolution features with the help of low-resolution features, and vice versa. The MIRNet employs a recursive residual design (with skip connections) to ease the flow of information during the learning process. In order to maintain the residual nature of our architecture, residual resizing modules are used to perform the downsampling and upsampling operations that are used in the Multi-Scale Residual Block. ![](https://i.imgur.com/wzZKV57.png)
###Code
# Recursive Residual Modules
def down_sampling_module(input_tensor):
channels = list(input_tensor.shape)[-1]
main_branch = layers.Conv2D(channels, kernel_size=(1, 1), activation="relu")(
input_tensor
)
main_branch = layers.Conv2D(
channels, kernel_size=(3, 3), padding="same", activation="relu"
)(main_branch)
main_branch = layers.MaxPooling2D()(main_branch)
main_branch = layers.Conv2D(channels * 2, kernel_size=(1, 1))(main_branch)
skip_branch = layers.MaxPooling2D()(input_tensor)
skip_branch = layers.Conv2D(channels * 2, kernel_size=(1, 1))(skip_branch)
return layers.Add()([skip_branch, main_branch])
def up_sampling_module(input_tensor):
channels = list(input_tensor.shape)[-1]
main_branch = layers.Conv2D(channels, kernel_size=(1, 1), activation="relu")(
input_tensor
)
main_branch = layers.Conv2D(
channels, kernel_size=(3, 3), padding="same", activation="relu"
)(main_branch)
main_branch = layers.UpSampling2D()(main_branch)
main_branch = layers.Conv2D(channels // 2, kernel_size=(1, 1))(main_branch)
skip_branch = layers.UpSampling2D()(input_tensor)
skip_branch = layers.Conv2D(channels // 2, kernel_size=(1, 1))(skip_branch)
return layers.Add()([skip_branch, main_branch])
# MRB Block
def multi_scale_residual_block(input_tensor, channels):
# features
level1 = input_tensor
level2 = down_sampling_module(input_tensor)
level3 = down_sampling_module(level2)
# DAU
level1_dau = dual_attention_unit_block(level1)
level2_dau = dual_attention_unit_block(level2)
level3_dau = dual_attention_unit_block(level3)
# SKFF
level1_skff = selective_kernel_feature_fusion(
level1_dau,
up_sampling_module(level2_dau),
up_sampling_module(up_sampling_module(level3_dau)),
)
level2_skff = selective_kernel_feature_fusion(
down_sampling_module(level1_dau), level2_dau, up_sampling_module(level3_dau)
)
level3_skff = selective_kernel_feature_fusion(
down_sampling_module(down_sampling_module(level1_dau)),
down_sampling_module(level2_dau),
level3_dau,
)
# DAU 2
level1_dau_2 = dual_attention_unit_block(level1_skff)
level2_dau_2 = up_sampling_module((dual_attention_unit_block(level2_skff)))
level3_dau_2 = up_sampling_module(
up_sampling_module(dual_attention_unit_block(level3_skff))
)
# SKFF 2
    skff_ = selective_kernel_feature_fusion(level1_dau_2, level2_dau_2, level3_dau_2)
conv = layers.Conv2D(channels, kernel_size=(3, 3), padding="same")(skff_)
return layers.Add()([input_tensor, conv])
###Output
_____no_output_____
###Markdown
MIRNet Model
###Code
def recursive_residual_group(input_tensor, num_mrb, channels):
conv1 = layers.Conv2D(channels, kernel_size=(3, 3), padding="same")(input_tensor)
for _ in range(num_mrb):
conv1 = multi_scale_residual_block(conv1, channels)
conv2 = layers.Conv2D(channels, kernel_size=(3, 3), padding="same")(conv1)
return layers.Add()([conv2, input_tensor])
def mirnet_model(num_rrg, num_mrb, channels):
input_tensor = keras.Input(shape=[None, None, 3])
x1 = layers.Conv2D(channels, kernel_size=(3, 3), padding="same")(input_tensor)
for _ in range(num_rrg):
x1 = recursive_residual_group(x1, num_mrb, channels)
conv = layers.Conv2D(3, kernel_size=(3, 3), padding="same")(x1)
output_tensor = layers.Add()([input_tensor, conv])
return keras.Model(input_tensor, output_tensor)
model = mirnet_model(num_rrg=3, num_mrb=2, channels=64)
###Output
_____no_output_____
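###Markdown
As a quick added check, we count the parameters of the assembled model (the exact number depends on the `num_rrg`, `num_mrb` and `channels` settings above).
###Code
print(f"Total parameters: {model.count_params():,}")
###Output
_____no_output_____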
###Markdown
Training: - We train MIRNet using the **Charbonnier loss** as the loss function and the **Adam** optimizer with a learning rate of `1e-4`. - We use the **Peak Signal-to-Noise Ratio** (PSNR) as a metric, which expresses the ratio between the maximum possible value (power) of a signal and the power of the distorting noise that affects the quality of its representation.
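For reference (an added note), the Charbonnier loss implemented below is $\mathcal{L}(y, \hat{y}) = \sqrt{(y - \hat{y})^2 + \epsilon^2}$ averaged over all pixels, with $\epsilon = 10^{-3}$ as in the code, and PSNR is computed as $10 \log_{10}(\mathrm{MAX}^2 / \mathrm{MSE})$ with `max_val=255.0`.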
###Code
def charbonnier_loss(y_true, y_pred):
return tf.reduce_mean(tf.sqrt(tf.square(y_true - y_pred) + tf.square(1e-3)))
def peak_signal_noise_ratio(y_true, y_pred):
return tf.image.psnr(y_pred, y_true, max_val=255.0)
optimizer = keras.optimizers.Adam(learning_rate=1e-4)
model.compile(
optimizer=optimizer, loss=charbonnier_loss, metrics=[peak_signal_noise_ratio]
)
history = model.fit(
train_dataset,
validation_data=val_dataset,
epochs=50,
callbacks=[
keras.callbacks.ReduceLROnPlateau(
monitor="val_peak_signal_noise_ratio",
factor=0.5,
patience=5,
verbose=1,
min_delta=1e-7,
mode="max",
)
],
)
plt.plot(history.history["loss"], label="train_loss")
plt.plot(history.history["val_loss"], label="val_loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.title("Train and Validation Losses Over Epochs", fontsize=14)
plt.legend()
plt.grid()
plt.show()
plt.plot(history.history["peak_signal_noise_ratio"], label="train_psnr")
plt.plot(history.history["val_peak_signal_noise_ratio"], label="val_psnr")
plt.xlabel("Epochs")
plt.ylabel("PSNR")
plt.title("Train and Validation PSNR Over Epochs", fontsize=14)
plt.legend()
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Inference
###Code
def plot_results(images, titles, figure_size=(12, 12)):
fig = plt.figure(figsize=figure_size)
for i in range(len(images)):
fig.add_subplot(1, len(images), i + 1).set_title(titles[i])
_ = plt.imshow(images[i])
plt.axis("off")
plt.show()
def infer(original_image):
image = keras.preprocessing.image.img_to_array(original_image)
image = image.astype("float32") / 255.0
image = np.expand_dims(image, axis=0)
output = model.predict(image)
output_image = output[0] * 255.0
output_image = output_image.clip(0, 255)
output_image = output_image.reshape(
(np.shape(output_image)[0], np.shape(output_image)[1], 3)
)
output_image = Image.fromarray(np.uint8(output_image))
original_image = Image.fromarray(np.uint8(original_image))
return output_image
###Output
_____no_output_____
###Markdown
Inference on Test ImagesWe compare the test images from LOLDataset enhanced by MIRNet with images enhanced via the `PIL.ImageOps.autocontrast()` function. You can use the trained model hosted on [Hugging Face Hub](https://huggingface.co/keras-io/lowlight-enhance-mirnet) and try the demo on [Hugging Face Spaces](https://huggingface.co/spaces/keras-io/Enhance_Low_Light_Image).
###Code
for low_light_image in random.sample(test_low_light_images, 6):
original_image = Image.open(low_light_image)
enhanced_image = infer(original_image)
plot_results(
[original_image, ImageOps.autocontrast(original_image), enhanced_image],
["Original", "PIL Autocontrast", "MIRNet Enhanced"],
(20, 12),
)
###Output
_____no_output_____
###Markdown
Low-light image enhancement using MIRNet**Author:** [Soumik Rakshit](http://github.com/soumik12345)**Date created:** 2021/09/11**Last modified:** 2021/09/15**Description:** Implementing the MIRNet architecture for low-light image enhancement. IntroductionWith the goal of recovering high-quality image content from its degraded version, image restoration enjoys numerous applications, such as in photography, security, medical imaging, and remote sensing. In this example, we implement the **MIRNet** model for low-light image enhancement, a fully-convolutional architecture that learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details. References:- [Learning Enriched Features for Real Image Restoration and Enhancement](https://arxiv.org/abs/2003.06792)- [The Retinex Theory of Color Vision](http://www.cnbc.cmu.edu/~tai/cp_papers/E.Land_Retinex_Theory_ScientifcAmerican.pdf)- [Two deterministic half-quadratic regularization algorithms for computed imaging](https://ieeexplore.ieee.org/document/413553) Downloading LOLDatasetThe **LoL Dataset** has been created for low-light image enhancement. It provides 485 images for training and 15 for testing. Each image pair in the dataset consists of a low-light input image and its corresponding well-exposed reference image.
###Code
import os
import cv2
import random
import numpy as np
from glob import glob
from PIL import Image, ImageOps
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
!gdown https://drive.google.com/uc?id=1DdGIJ4PZPlF2ikl8mNM9V-PdVxVLbQi6
!unzip -q lol_dataset.zip
###Output
_____no_output_____
###Markdown
Creating a TensorFlow DatasetWe use 300 image pairs from the LoL Dataset's training set for training, and we use the remaining 185 image pairs for validation. We generate random crops of size `128 x 128` from the image pairs to be used for both training and validation.
###Code
random.seed(10)
IMAGE_SIZE = 128
BATCH_SIZE = 4
MAX_TRAIN_IMAGES = 300
def read_image(image_path):
image = tf.io.read_file(image_path)
image = tf.image.decode_png(image, channels=3)
image.set_shape([None, None, 3])
image = tf.cast(image, dtype=tf.float32) / 255.0
return image
def random_crop(low_image, enhanced_image):
low_image_shape = tf.shape(low_image)[:2]
low_w = tf.random.uniform(
shape=(), maxval=low_image_shape[1] - IMAGE_SIZE + 1, dtype=tf.int32
)
low_h = tf.random.uniform(
shape=(), maxval=low_image_shape[0] - IMAGE_SIZE + 1, dtype=tf.int32
)
enhanced_w = low_w
enhanced_h = low_h
low_image_cropped = low_image[
low_h : low_h + IMAGE_SIZE, low_w : low_w + IMAGE_SIZE
]
enhanced_image_cropped = enhanced_image[
enhanced_h : enhanced_h + IMAGE_SIZE, enhanced_w : enhanced_w + IMAGE_SIZE
]
return low_image_cropped, enhanced_image_cropped
def load_data(low_light_image_path, enhanced_image_path):
low_light_image = read_image(low_light_image_path)
enhanced_image = read_image(enhanced_image_path)
low_light_image, enhanced_image = random_crop(low_light_image, enhanced_image)
return low_light_image, enhanced_image
def get_dataset(low_light_images, enhanced_images):
dataset = tf.data.Dataset.from_tensor_slices((low_light_images, enhanced_images))
dataset = dataset.map(load_data, num_parallel_calls=tf.data.AUTOTUNE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
return dataset
train_low_light_images = sorted(glob("./lol_dataset/our485/low/*"))[:MAX_TRAIN_IMAGES]
train_enhanced_images = sorted(glob("./lol_dataset/our485/high/*"))[:MAX_TRAIN_IMAGES]
val_low_light_images = sorted(glob("./lol_dataset/our485/low/*"))[MAX_TRAIN_IMAGES:]
val_enhanced_images = sorted(glob("./lol_dataset/our485/high/*"))[MAX_TRAIN_IMAGES:]
test_low_light_images = sorted(glob("./lol_dataset/eval15/low/*"))
test_enhanced_images = sorted(glob("./lol_dataset/eval15/high/*"))
train_dataset = get_dataset(train_low_light_images, train_enhanced_images)
val_dataset = get_dataset(val_low_light_images, val_enhanced_images)
print("Train Dataset:", train_dataset)
print("Val Dataset:", val_dataset)
###Output
_____no_output_____
###Markdown
MIRNet ModelHere are the main features of the MIRNet model:- A feature extraction model that computes a complementary set of features across multiple spatial scales, while maintaining the original high-resolution features to preserve precise spatial details.- A regularly repeated mechanism for information exchange, where the features across multi-resolution branches are progressively fused together for improved representation learning.- A new approach to fuse multi-scale features using a selective kernel network that dynamically combines variable receptive fields and faithfully preserves the original feature information at each spatial resolution.- A recursive residual design that progressively breaks down the input signal in order to simplify the overall learning process, and allows the construction of very deep networks.![](https://raw.githubusercontent.com/soumik12345/MIRNet/master/assets/mirnet_architecture.png) Selective Kernel Feature FusionThe Selective Kernel Feature Fusion or SKFF module performs dynamic adjustment of receptive fields via two operations: **Fuse** and **Select**. The Fuse operator generates global feature descriptors by combining the information from multi-resolution streams. The Select operator uses these descriptors to recalibrate the feature maps (of different streams) followed by their aggregation.**Fuse**: The SKFF receives inputs from three parallel convolution streams carrying different scales of information. We first combine these multi-scale features using an element-wise sum, on which we apply Global Average Pooling (GAP) across the spatial dimension. Next, we apply a channel-downscaling convolution layer to generate a compact feature representation which passes through three parallel channel-upscaling convolution layers (one for each resolution stream) and provides us with three feature descriptors.**Select**: This operator applies the softmax function to the feature descriptors to obtain the corresponding activations that are used to adaptively recalibrate multi-scale feature maps. The aggregated features are defined as the sum of product of the corresponding multi-scale feature and the feature descriptor.![](https://i.imgur.com/7U6ixF6.png)
###Code
def selective_kernel_feature_fusion(
multi_scale_feature_1, multi_scale_feature_2, multi_scale_feature_3
):
channels = list(multi_scale_feature_1.shape)[-1]
combined_feature = layers.Add()(
[multi_scale_feature_1, multi_scale_feature_2, multi_scale_feature_3]
)
gap = layers.GlobalAveragePooling2D()(combined_feature)
channel_wise_statistics = tf.reshape(gap, shape=(-1, 1, 1, channels))
compact_feature_representation = layers.Conv2D(
filters=channels // 8, kernel_size=(1, 1), activation="relu"
)(channel_wise_statistics)
feature_descriptor_1 = layers.Conv2D(
channels, kernel_size=(1, 1), activation="softmax"
)(compact_feature_representation)
feature_descriptor_2 = layers.Conv2D(
channels, kernel_size=(1, 1), activation="softmax"
)(compact_feature_representation)
feature_descriptor_3 = layers.Conv2D(
channels, kernel_size=(1, 1), activation="softmax"
)(compact_feature_representation)
feature_1 = multi_scale_feature_1 * feature_descriptor_1
feature_2 = multi_scale_feature_2 * feature_descriptor_2
feature_3 = multi_scale_feature_3 * feature_descriptor_3
aggregated_feature = layers.Add()([feature_1, feature_2, feature_3])
return aggregated_feature
###Output
_____no_output_____
###Markdown
Dual Attention UnitThe Dual Attention Unit or DAU is used to extract features in the convolutional streams. While the SKFF block fuses information across multi-resolution branches, we also need a mechanism to share information within a feature tensor, both along the spatial and the channel dimensions which is done by the DAU block. The DAU suppresses less useful features and only allows more informative ones to pass further. This feature recalibration is achieved by using **Channel Attention** and **Spatial Attention** mechanisms. The **Channel Attention** branch exploits the inter-channel relationships of the convolutional feature maps by applying squeeze and excitation operations. Given a feature map, the squeeze operation applies Global Average Pooling across spatial dimensions to encode global context, thus yielding a feature descriptor. The excitation operator passes this feature descriptor through two convolutional layers followed by the sigmoid gating and generates activations. Finally, the output of Channel Attention branch is obtained by rescaling the input feature map with the output activations. The **Spatial Attention** branch is designed to exploit the inter-spatial dependencies of convolutional features. The goal of Spatial Attention is to generate a spatial attention map and use it to recalibrate the incoming features. To generate the spatial attention map, the Spatial Attention branch first independently applies Global Average Pooling and Max Pooling operations on input features along the channel dimensions and concatenates the outputs to form a resultant feature map which is then passed through a convolution and sigmoid activation to obtain the spatial attention map. This spatial attention map is then used to rescale the input feature map.![](https://i.imgur.com/Dl0IwQs.png)
###Code
def spatial_attention_block(input_tensor):
average_pooling = tf.reduce_mean(input_tensor, axis=-1)
average_pooling = tf.expand_dims(average_pooling, axis=-1)
max_pooling = tf.reduce_max(input_tensor, axis=-1)
max_pooling = tf.expand_dims(max_pooling, axis=-1)
concatenated = layers.Concatenate(axis=-1)([average_pooling, max_pooling])
feature_map = layers.Conv2D(1, kernel_size=(1, 1))(concatenated)
feature_map = tf.nn.sigmoid(feature_map)
return input_tensor * feature_map
def channel_attention_block(input_tensor):
channels = list(input_tensor.shape)[-1]
average_pooling = layers.GlobalAveragePooling2D()(input_tensor)
feature_descriptor = tf.reshape(average_pooling, shape=(-1, 1, 1, channels))
feature_activations = layers.Conv2D(
filters=channels // 8, kernel_size=(1, 1), activation="relu"
)(feature_descriptor)
feature_activations = layers.Conv2D(
filters=channels, kernel_size=(1, 1), activation="sigmoid"
)(feature_activations)
return input_tensor * feature_activations
def dual_attention_unit_block(input_tensor):
channels = list(input_tensor.shape)[-1]
feature_map = layers.Conv2D(
channels, kernel_size=(3, 3), padding="same", activation="relu"
)(input_tensor)
feature_map = layers.Conv2D(channels, kernel_size=(3, 3), padding="same")(
feature_map
)
channel_attention = channel_attention_block(feature_map)
spatial_attention = spatial_attention_block(feature_map)
concatenation = layers.Concatenate(axis=-1)([channel_attention, spatial_attention])
concatenation = layers.Conv2D(channels, kernel_size=(1, 1))(concatenation)
return layers.Add()([input_tensor, concatenation])
###Output
_____no_output_____
###Markdown
Multi-Scale Residual BlockThe Multi-Scale Residual Block is capable of generating a spatially-precise output by maintaining high-resolution representations, while receiving rich contextual information from low-resolutions. The MRB consists of multiple (three in this paper) fully-convolutional streams connected in parallel. It allows information exchange across parallel streams in order to consolidate the high-resolution features with the help of low-resolution features, and vice versa. The MIRNet employs a recursive residual design (with skip connections) to ease the flow of information during the learning process. In order to maintain the residual nature of our architecture, residual resizing modules are used to perform downsampling and upsampling operations that are used in the Multi-scale Residual Block.![](https://i.imgur.com/wzZKV57.png)
###Code
# Recursive Residual Modules
def down_sampling_module(input_tensor):
channels = list(input_tensor.shape)[-1]
main_branch = layers.Conv2D(channels, kernel_size=(1, 1), activation="relu")(
input_tensor
)
main_branch = layers.Conv2D(
channels, kernel_size=(3, 3), padding="same", activation="relu"
)(main_branch)
main_branch = layers.MaxPooling2D()(main_branch)
main_branch = layers.Conv2D(channels * 2, kernel_size=(1, 1))(main_branch)
skip_branch = layers.MaxPooling2D()(input_tensor)
skip_branch = layers.Conv2D(channels * 2, kernel_size=(1, 1))(skip_branch)
return layers.Add()([skip_branch, main_branch])
def up_sampling_module(input_tensor):
channels = list(input_tensor.shape)[-1]
main_branch = layers.Conv2D(channels, kernel_size=(1, 1), activation="relu")(
input_tensor
)
main_branch = layers.Conv2D(
channels, kernel_size=(3, 3), padding="same", activation="relu"
)(main_branch)
main_branch = layers.UpSampling2D()(main_branch)
main_branch = layers.Conv2D(channels // 2, kernel_size=(1, 1))(main_branch)
skip_branch = layers.UpSampling2D()(input_tensor)
skip_branch = layers.Conv2D(channels // 2, kernel_size=(1, 1))(skip_branch)
return layers.Add()([skip_branch, main_branch])
# MRB Block
def multi_scale_residual_block(input_tensor, channels):
# features
level1 = input_tensor
level2 = down_sampling_module(input_tensor)
level3 = down_sampling_module(level2)
# DAU
level1_dau = dual_attention_unit_block(level1)
level2_dau = dual_attention_unit_block(level2)
level3_dau = dual_attention_unit_block(level3)
# SKFF
level1_skff = selective_kernel_feature_fusion(
level1_dau,
up_sampling_module(level2_dau),
up_sampling_module(up_sampling_module(level3_dau)),
)
level2_skff = selective_kernel_feature_fusion(
down_sampling_module(level1_dau), level2_dau, up_sampling_module(level3_dau)
)
level3_skff = selective_kernel_feature_fusion(
down_sampling_module(down_sampling_module(level1_dau)),
down_sampling_module(level2_dau),
level3_dau,
)
# DAU 2
level1_dau_2 = dual_attention_unit_block(level1_skff)
level2_dau_2 = up_sampling_module((dual_attention_unit_block(level2_skff)))
level3_dau_2 = up_sampling_module(
up_sampling_module(dual_attention_unit_block(level3_skff))
)
# SKFF 2
skff_ = selective_kernel_feature_fusion(level1_dau_2, level2_dau_2, level3_dau_2)
conv = layers.Conv2D(channels, kernel_size=(3, 3), padding="same")(skff_)
return layers.Add()([input_tensor, conv])
###Output
_____no_output_____
###Markdown
MIRNet Model
###Code
def recursive_residual_group(input_tensor, num_mrb, channels):
conv1 = layers.Conv2D(channels, kernel_size=(3, 3), padding="same")(input_tensor)
for _ in range(num_mrb):
conv1 = multi_scale_residual_block(conv1, channels)
conv2 = layers.Conv2D(channels, kernel_size=(3, 3), padding="same")(conv1)
return layers.Add()([conv2, input_tensor])
def mirnet_model(num_rrg, num_mrb, channels):
input_tensor = keras.Input(shape=[None, None, 3])
x1 = layers.Conv2D(channels, kernel_size=(3, 3), padding="same")(input_tensor)
for _ in range(num_rrg):
x1 = recursive_residual_group(x1, num_mrb, channels)
conv = layers.Conv2D(3, kernel_size=(3, 3), padding="same")(x1)
output_tensor = layers.Add()([input_tensor, conv])
return keras.Model(input_tensor, output_tensor)
model = mirnet_model(num_rrg=3, num_mrb=2, channels=64)
###Output
_____no_output_____
###Markdown
Training- We train MIRNet using **Charbonnier Loss** as the loss function and **AdamOptimizer** with a learning rate of `1e-4`.- We use **Peak Signal Noise Ratio** or PSNR as a metric which is an expression for the ratio between the maximum possible value (power) of a signal and the power of distorting noise that affects the quality of its representation.
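Since PSNR is just a log-scaled inverse of the mean squared error, a quick back-of-the-envelope check can help interpret the numbers (a sketch added for illustration, not part of the original notebook). Note that the training pipeline feeds the model images scaled to `[0, 1]`, while the metric below uses `max_val=255.0`, which shifts the reported PSNR by a constant of about 48 dB; comparisons between epochs are unaffected.

```python
# Illustration only: PSNR = 10 * log10(max_val^2 / MSE).
import numpy as np

mse = 0.01                                    # hypothetical MSE on [0, 1] images
psnr_unit_range = 10 * np.log10(1.0 ** 2 / mse)
psnr_255_range = 10 * np.log10(255.0 ** 2 / mse)
print(psnr_unit_range)                        # 20.0 dB
print(psnr_255_range)                         # ~68.13 dB, offset by 20*log10(255) ~ 48.13 dB
```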
###Code
def charbonnier_loss(y_true, y_pred):
return tf.reduce_mean(tf.sqrt(tf.square(y_true - y_pred) + tf.square(1e-3)))
def peak_signal_noise_ratio(y_true, y_pred):
return tf.image.psnr(y_pred, y_true, max_val=255.0)
optimizer = keras.optimizers.Adam(learning_rate=1e-4)
model.compile(
optimizer=optimizer, loss=charbonnier_loss, metrics=[peak_signal_noise_ratio]
)
history = model.fit(
train_dataset,
validation_data=val_dataset,
epochs=50,
callbacks=[
keras.callbacks.ReduceLROnPlateau(
monitor="val_peak_signal_noise_ratio",
factor=0.5,
patience=5,
verbose=1,
min_delta=1e-7,
mode="max",
)
],
)
plt.plot(history.history["loss"], label="train_loss")
plt.plot(history.history["val_loss"], label="val_loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.title("Train and Validation Losses Over Epochs", fontsize=14)
plt.legend()
plt.grid()
plt.show()
plt.plot(history.history["peak_signal_noise_ratio"], label="train_psnr")
plt.plot(history.history["val_peak_signal_noise_ratio"], label="val_psnr")
plt.xlabel("Epochs")
plt.ylabel("PSNR")
plt.title("Train and Validation PSNR Over Epochs", fontsize=14)
plt.legend()
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Inference
###Code
def plot_results(images, titles, figure_size=(12, 12)):
fig = plt.figure(figsize=figure_size)
for i in range(len(images)):
fig.add_subplot(1, len(images), i + 1).set_title(titles[i])
_ = plt.imshow(images[i])
plt.axis("off")
plt.show()
def infer(original_image):
image = keras.preprocessing.image.img_to_array(original_image)
image = image.astype("float32") / 255.0
image = np.expand_dims(image, axis=0)
output = model.predict(image)
output_image = output[0] * 255.0
output_image = output_image.clip(0, 255)
output_image = output_image.reshape(
(np.shape(output_image)[0], np.shape(output_image)[1], 3)
)
output_image = Image.fromarray(np.uint8(output_image))
original_image = Image.fromarray(np.uint8(original_image))
return output_image
###Output
_____no_output_____
###Markdown
Inference on Test ImagesWe compare the test images from LOLDataset enhanced by MIRNet with images enhanced via the `PIL.ImageOps.autocontrast()` function.
###Code
for low_light_image in random.sample(test_low_light_images, 6):
original_image = Image.open(low_light_image)
enhanced_image = infer(original_image)
plot_results(
[original_image, ImageOps.autocontrast(original_image), enhanced_image],
["Original", "PIL Autocontrast", "MIRNet Enhanced"],
(20, 12),
)
###Output
_____no_output_____ |
04_ingest/archive/glue-etl/continuous-nyc-taxi-dataset/AIM357-TestingForecastResults.ipynb | ###Markdown
Examine notebook used to visualize results First we will load the endpoint name, end of training time, prediction length and some of the data
###Code
%store -r
print('endpoint name ', endpoint_name)
print('end training', end_training)
print('prediction_length', prediction_length)
###Output
endpoint name DeepAR-forecast-taxidata-2019-12-30-18-32-45-715
end training 2019-05-06 00:00:00
prediction_length 14
###Markdown
Sample data being used:
###Code
print('data sample')
ABB.head(5)
###Output
data sample
###Markdown
This next cell creates the predictor using the endpoint_name. Ideally we'd have the DeepARPredictor in a separate .py rather than repeated in the two notebooks.
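Once the class below has been run and the endpoint is in service, a direct call looks roughly like the following sketch; the series name, values, and dates are made up for illustration.

```python
# Hypothetical usage sketch (assumes the DeepARPredictor class defined below has been
# run and `endpoint_name` points at a live endpoint); the data here is made up.
import pandas as pd

toy_ts = pd.Series(
    [120.0, 135.0, 128.0, 142.0, 150.0],
    index=pd.date_range("2019-04-01", periods=5, freq="D"),
    name="yellow",
)
# predictor = DeepARPredictor(endpoint_name)
# forecast = predictor.predict(ts=toy_ts, quantiles=["0.1", "0.5", "0.9"])
# `forecast` would be a DataFrame with one column per requested quantile,
# indexed by the `prediction_length` dates after the end of `toy_ts`.
```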
###Code
import sagemaker
from sagemaker import get_execution_role
from sagemaker.tuner import HyperparameterTuner
import numpy as np
import json
import pandas as pd
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
class DeepARPredictor(sagemaker.predictor.RealTimePredictor):
def __init__(self, *args, **kwargs):
super().__init__(*args, content_type=sagemaker.content_types.CONTENT_TYPE_JSON, **kwargs)
def predict(self, ts, cat=None, dynamic_feat=None,
num_samples=100, return_samples=False, quantiles=["0.1", "0.5", "0.9"]):
"""Requests the prediction of for the time series listed in `ts`, each with the (optional)
corresponding category listed in `cat`.
ts -- `pandas.Series` object, the time series to predict
cat -- integer, the group associated to the time series (default: None)
num_samples -- integer, number of samples to compute at prediction time (default: 100)
return_samples -- boolean indicating whether to include samples in the response (default: False)
quantiles -- list of strings specifying the quantiles to compute (default: ["0.1", "0.5", "0.9"])
Return value: list of `pandas.DataFrame` objects, each containing the predictions
"""
prediction_time = ts.index[-1] + 1
quantiles = [str(q) for q in quantiles]
req = self.__encode_request(ts, cat, dynamic_feat, num_samples, return_samples, quantiles)
res = super(DeepARPredictor, self).predict(req)
return self.__decode_response(res, ts.index.freq, prediction_time, return_samples)
def __encode_request(self, ts, cat, dynamic_feat, num_samples, return_samples, quantiles):
instance = series_to_dict(ts, cat if cat is not None else None, dynamic_feat if dynamic_feat else None)
configuration = {
"num_samples": num_samples,
"output_types": ["quantiles", "samples"] if return_samples else ["quantiles"],
"quantiles": quantiles
}
http_request_data = {
"instances": [instance],
"configuration": configuration
}
return json.dumps(http_request_data).encode('utf-8')
def __decode_response(self, response, freq, prediction_time, return_samples):
# we only sent one time series so we only receive one in return
# however, if possible one will pass multiple time series as predictions will then be faster
predictions = json.loads(response.decode('utf-8'))['predictions'][0]
prediction_length = len(next(iter(predictions['quantiles'].values())))
prediction_index = pd.DatetimeIndex(start=prediction_time, freq=freq, periods=prediction_length)
if return_samples:
dict_of_samples = {'sample_' + str(i): s for i, s in enumerate(predictions['samples'])}
else:
dict_of_samples = {}
return pd.DataFrame(data={**predictions['quantiles'], **dict_of_samples}, index=prediction_index)
def set_frequency(self, freq):
self.freq = freq
def encode_target(ts):
return [x if np.isfinite(x) else "NaN" for x in ts]
def series_to_dict(ts, cat=None, dynamic_feat=None):
"""Given a pandas.Series object, returns a dictionary encoding the time series.
ts -- a pands.Series object with the target time series
cat -- an integer indicating the time series category
Return value: a dictionary
"""
obj = {"start": str(ts.index[0]), "target": encode_target(ts)}
if cat is not None:
obj["cat"] = cat
if dynamic_feat is not None:
obj["dynamic_feat"] = dynamic_feat
return obj
predictor = DeepARPredictor(endpoint_name)
import matplotlib
import matplotlib.pyplot as plt
def plot(
predictor,
target_ts,
cat=None,
dynamic_feat=None,
forecast_date=end_training,
show_samples=False,
plot_history=7 * 12,
confidence=80,
num_samples=100,
draw_color='blue'
):
print("Calling endpoint to generate {} predictions starting from {} ...".format(target_ts.name, str(forecast_date)))
assert(confidence > 50 and confidence < 100)
low_quantile = 0.5 - confidence * 0.005
up_quantile = confidence * 0.005 + 0.5
# we first construct the argument to call our model
args = {
"ts": target_ts[:forecast_date],
"return_samples": show_samples,
"quantiles": [low_quantile, 0.5, up_quantile],
"num_samples": num_samples
}
if dynamic_feat is not None:
args["dynamic_feat"] = dynamic_feat
fig = plt.figure(figsize=(20, 6))
ax = plt.subplot(2, 1, 1)
else:
fig = plt.figure(figsize=(20, 3))
ax = plt.subplot(1,1,1)
if cat is not None:
args["cat"] = cat
ax.text(0.9, 0.9, 'cat = {}'.format(cat), transform=ax.transAxes)
# call the end point to get the prediction
prediction = predictor.predict(**args)
# plot the samples
mccolor = draw_color
if show_samples:
for key in prediction.keys():
if "sample" in key:
prediction[key].asfreq('D').plot(color='lightskyblue', alpha=0.2, label='_nolegend_')
# the date didn't have a frequency in it, so setting it here.
new_date = pd.Timestamp(forecast_date, freq='d')
target_section = target_ts[new_date-plot_history:new_date+prediction_length]
target_section.asfreq('D').plot(color="black", label='target')
plt.title(target_ts.name.upper(), color='darkred')
# plot the confidence interval and the median predicted
ax.fill_between(
prediction[str(low_quantile)].index,
prediction[str(low_quantile)].values,
prediction[str(up_quantile)].values,
color=mccolor, alpha=0.3, label='{}% confidence interval'.format(confidence)
)
prediction["0.5"].plot(color=mccolor, label='P50')
ax.legend(loc=2)
# fix the scale as the samples may change it
ax.set_ylim(target_section.min() * 0.5, target_section.max() * 1.5)
if dynamic_feat is not None:
for i, f in enumerate(dynamic_feat, start=1):
ax = plt.subplot(len(dynamic_feat) * 2, 1, len(dynamic_feat) + i, sharex=ax)
feat_ts = pd.Series(
index=pd.DatetimeIndex(start=target_ts.index[0], freq=target_ts.index.freq, periods=len(f)),
data=f
)
feat_ts[forecast_date-plot_history:forecast_date+prediction_length].plot(ax=ax, color='g')
###Output
_____no_output_____
###Markdown
Let's interact w/ the samples and forecast values now.
###Code
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from ipywidgets import IntSlider, FloatSlider, Checkbox, RadioButtons
import datetime
style = {'description_width': 'initial'}
@interact_manual(
series_type=RadioButtons(options=['full_fhv', 'yellow', 'green'], value='yellow', description='Type'),
forecast_day=IntSlider(min=0, max=100, value=21, style=style),
confidence=IntSlider(min=60, max=95, value=80, step=5, style=style),
history_weeks_plot=IntSlider(min=1, max=20, value=4, style=style),
num_samples=IntSlider(min=100, max=1000, value=100, step=500, style=style),
show_samples=Checkbox(value=True),
continuous_update=False
)
def plot_interact(series_type, forecast_day, confidence, history_weeks_plot, show_samples, num_samples):
plot(
predictor,
target_ts=ABB[series_type].asfreq(freq='d', fill_value=0),
forecast_date=end_training + datetime.timedelta(days=forecast_day),
show_samples=show_samples,
plot_history=history_weeks_plot * prediction_length,
confidence=confidence,
num_samples=num_samples
)
###Output
_____no_output_____ |
util_nbs/00a_data_manage.gdrive_interact.ipynb | ###Markdown
Repo ManagementWhile I don't want to track large data files with git (also some I'd like to keep private), I still want to make use of the cloud to store my files in case something happens to my local machine. Thus, here I outline the ability to shuttle files between my Google Drive and this repo (a first build solution, we'll see if it lasts). Accessing Google driveUsing pydrive https://pythonhosted.org/PyDrive/quickstart.html, I came up with the following code. General utils and conventionsNeed to go to Google's API Console (see link above) and download the `client_secrets.json` and put it in this directory (perhaps also in the ml module directory). I think this only needs to be done once. Prepping connection
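One optional convenience worth knowing about (a sketch based on the standard PyDrive credential-caching pattern, not part of the flow below): the OAuth credentials can be cached to a local file so the browser step only has to happen once. The `mycreds.txt` filename is arbitrary and should stay out of version control.

```python
# Optional sketch: cache credentials so LocalWebserverAuth() is only needed on the first run.
from pydrive.auth import GoogleAuth

gauth = GoogleAuth()
gauth.LoadCredentialsFile("mycreds.txt")   # silently does nothing if the file is missing
if gauth.credentials is None:
    gauth.LocalWebserverAuth()             # first run: opens the browser auth flow
elif gauth.access_token_expired:
    gauth.Refresh()                        # refresh an expired token without the browser
else:
    gauth.Authorize()                      # reuse the cached, still-valid credentials
gauth.SaveCredentialsFile("mycreds.txt")
```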
###Code
#export
gauth = GoogleAuth()
# this needs to be added to the root of the repo
cred_fpath = local_repo_path + 'client_secrets.json'
# tell pydrive where to look for it
gauth.DEFAULT_SETTINGS['client_config_file'] = cred_fpath
# initiate the drive object and open the connection
drive = GoogleDrive(gauth)
gauth.LocalWebserverAuth() # Creates local webserver and auto handles authentication.
###Output
Your browser has been opened to visit:
https://accounts.google.com/o/oauth2/auth?client_id=884310440114-oqhbrdkc3vikjmr3nvnrkb0ptr7lvp8r.apps.googleusercontent.com&redirect_uri=http%3A%2F%2Flocalhost%3A8080%2F&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&access_type=offline&response_type=code
Authentication successful.
###Markdown
Encoding google file typesThese are super long and not always intuitive so I'll store them in a dict that will make them more readable
###Code
# export
gtypes = {
'folder' : 'application/vnd.google-apps.folder'
}
gtypes['folder']
###Output
_____no_output_____
###Markdown
Grabbing root id
###Code
# export
def get_root_remote_id(folderName = 'ml_repo_data', gtypes=gtypes):
# query google drive
folders = drive.ListFile(
{'q': f"title='{folderName}' and mimeType='{gtypes['folder']}' and trashed=false"}).GetList()
folder = folders[0] # the above returns a list
return folder['id']
root_id = get_root_remote_id()
root_id[:5] # not going to print all 33 chars
###Output
_____no_output_____
###Markdown
Grabbing folder idThe argument is the id of the folder above it in the tree (the `parent` id)
###Code
# export
def get_folder_id(parent_id, foldername):
# grab the folder
ftype = gtypes['folder'] # unfortunately if I don't do this Jupyter freaks out with indentations/coloration
folders = drive.ListFile(
{'q': f"title='{foldername}' and mimeType='{ftype}' and '{parent_id}' in parents and trashed=false"}).GetList()
folder = folders[0] # the above returns a list
return folder['id']
DLM_id = get_folder_id(parent_id = root_id, foldername = 'DL_music')
DLM_id[:5] # not going to print all 33 chars
###Output
_____no_output_____
###Markdown
Grabbing folder contents
###Code
# export
def grab_folder_contents(parent_id):
'''Return a list of all the items in a folder based on its parent id'''
file_list = drive.ListFile({'q': f"'{parent_id}' in parents and trashed=false"}).GetList()
return file_list
file_list = grab_folder_contents(DLM_id)
# it returns a list
file = file_list[1]
# each file is a dictionary of information
file.keys()
###Output
_____no_output_____
###Markdown
check if file exists remote by name and parent
###Code
# export
def check_file_exists_remote(parent_id, fname):
file_list = grab_folder_contents(parent_id)
for file in file_list:
if file['title'] == fname: return True
return False
parent_id = file['parents'][0]['id']
fname = file['title']
check_file_exists_remote(parent_id, fname)
###Output
_____no_output_____
###Markdown
Grabbing file id
###Code
# export
def get_file_id(parent_id, fname):
# grab the folder
ftype = gtypes['folder'] # unfortunately if I don't do this Jupyter freaks out with indentations/coloration
file_list = drive.ListFile(
{'q': f"title='{fname}' and '{parent_id}' in parents and trashed=false"}).GetList()
file = file_list[0] # the above returns a list
return file['id']
file_id = get_file_id(parent_id, fname)
file_id[:5]
###Output
_____no_output_____
###Markdown
downloading filesEverything draws from PyDrive's "file" object, which can be initiated with the file's remote id. Downloading it from there is simple
###Code
# export
def download_file(file_id, local_dpath = None):
# Create a GoogleDriveFile instance with the given file id.
file = drive.CreateFile({'id': file_id})
local_dpath = './' if local_dpath is None else local_repo_path + local_dpath
local_fpath = local_dpath + file['title']
file.GetContentFile(local_fpath)
return local_fpath
local_dpath = 'data/DeepLearn_Music/'
file_id = file['id']
local_fpath = download_file(file_id, local_dpath)
local_fpath
###Output
_____no_output_____
###Markdown
uploading new file
###Code
# export
def upload_new_file(local_fpath, fname, parent_id):
file = drive.CreateFile({'parents': [{'id': f'{parent_id}'}]})
file['title'] = fname
file.SetContentFile(local_fpath)
file.Upload()
return
upload_new_file(local_fpath, file['title'], file['parents'][0]['id'])
###Output
_____no_output_____
###Markdown
updating existing file
###Code
# export
def update_existing_file(local_fpath, file_id):
file = drive.CreateFile({'id': file_id})
file.SetContentFile(local_fpath)
file.Upload()
return
update_existing_file(local_fpath, file['id'])
###Output
_____no_output_____
###Markdown
Sync a file to remoteRegardless of whether it exists remotely or not (it will check)
###Code
# export
def sync_file_to_remote(local_fpath, fname, parent_id):
'''will check if file exists remote first then will upload/update
accordingly'''
file_exists_remote = check_file_exists_remote(parent_id, fname)
# update if its already there
if file_exists_remote:
file_id = get_file_id(parent_id, fname)
update_existing_file(local_fpath, file_id)
# upload a new one else
else:
upload_new_file(local_fpath, fname, parent_id)
return
sync_file_to_remote(local_fpath, file['title'], file['parents'][0]['id'])
###Output
_____no_output_____ |
multiclass_logistic_regression.ipynb | ###Markdown
Softmax regression in plain PythonSoftmax regression, also called multinomial logistic regression, extends [logistic regression](logistic_regression.ipynb) to multiple classes.**Given:** - dataset $\{(\boldsymbol{x}^{(1)}, y^{(1)}), ..., (\boldsymbol{x}^{(m)}, y^{(m)})\}$- with $\boldsymbol{x}^{(i)}$ being a $d$-dimensional vector $\boldsymbol{x}^{(i)} = (x^{(i)}_1, ..., x^{(i)}_d)$- $y^{(i)}$ being the target variable for $\boldsymbol{x}^{(i)}$, for example with $K = 3$ classes we might have $y^{(i)} \in \{0, 1, 2\}$A softmax regression model has the following features: - a separate real-valued weight vector $\boldsymbol{w}= (w^{(1)}, ..., w^{(d)})$ for each class. The weight vectors are typically stored as rows in a weight matrix.- a separate real-valued bias $b$ for each class- the softmax function as an activation function- the cross-entropy loss functionThe training procedure of a softmax regression model has different steps. In the beginning (step 0) the model parameters are initialized. The other steps (see below) are repeated for a specified number of training iterations or until the parameters have converged.* * * **Step 0:** Initialize the weight matrix and bias values with zeros (or small random values).* * * **Step 1:** For each class $k$ compute a linear combination of the input features and the weight vector of class $k$, that is, for each training example compute a score for each class. For class $k$ and input vector $\boldsymbol{x}^{(i)}$ we have:$score_{k}(\boldsymbol{x}^{(i)}) = \boldsymbol{w}_{k}^T \cdot \boldsymbol{x}^{(i)} + b_{k}$where $\cdot$ is the dot product and $\boldsymbol{w}_{k}$ the weight vector of class $k$.We can compute the scores for all classes and training examples in parallel, using vectorization and broadcasting:$\boldsymbol{scores} = \boldsymbol{X} \cdot \boldsymbol{W}^T + \boldsymbol{b}$ where $\boldsymbol{X}$ is a matrix of shape $(n_{samples}, n_{features})$ that holds all training examples, and $\boldsymbol{W}$ is a matrix of shape $(n_{classes}, n_{features})$ that holds the weight vector for each class. * * * **Step 2:** Apply the softmax activation function to transform the scores into probabilities. The probability that an input vector $\boldsymbol{x}^{(i)}$ belongs to class $k$ is given by$\hat{p}_k(\boldsymbol{x}^{(i)}) = \frac{\exp(score_{k}(\boldsymbol{x}^{(i)}))}{\sum_{j=1}^{K} \exp(score_{j}(\boldsymbol{x}^{(i)}))}$Again we can perform this step for all classes and training examples at once using vectorization. The class predicted by the model for $\boldsymbol{x}^{(i)}$ is then simply the class with the highest probability.* * * **Step 3:** Compute the cost over the whole training set. We want our model to predict a high probability for the target class and a low probability for the other classes. This can be achieved using the cross entropy loss function: $J(\boldsymbol{W},b) = - \frac{1}{m} \sum_{i=1}^m \sum_{k=1}^{K} \Big[ y_k^{(i)} \log(\hat{p}_k^{(i)})\Big]$In this formula, the target labels are *one-hot encoded*. So $y_k^{(i)}$ is $1$ if the target class for $\boldsymbol{x}^{(i)}$ is $k$, otherwise $y_k^{(i)}$ is $0$.Note: when there are only two classes, this cost function is equivalent to the cost function of [logistic regression](logistic_regression.ipynb).* * * **Step 4:** Compute the gradient of the cost function with respect to each weight vector and bias.
A detailed explanation of this derivation can be found [here](http://ufldl.stanford.edu/tutorial/supervised/SoftmaxRegression/). The general formula for class $k$ is given by:$ \nabla_{\boldsymbol{w}_k} J(\boldsymbol{W}, b) = \frac{1}{m}\sum_{i=1}^m\boldsymbol{x}^{(i)} \left[\hat{p}_k^{(i)}-y_k^{(i)}\right]$For the bias of class $k$, the corresponding input is simply $1$, so its gradient is the average of $\hat{p}_k^{(i)}-y_k^{(i)}$ over the training set.* * * **Step 5:** Update the weights and biases for each class $k$:$\boldsymbol{w}_k = \boldsymbol{w}_k - \eta \, \nabla_{\boldsymbol{w}_k} J$ $b_k = b_k - \eta \, \nabla_{b_k} J$where $\eta$ is the learning rate.
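To tie the formulas above to concrete numbers before looking at the full implementation, here is a tiny worked example (a sketch with made-up data; it uses the same `(n_features, n_classes)` weight layout as the class below):

```python
# One full pass of steps 1-5 on a made-up batch: m=2 samples, d=2 features, K=3 classes.
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 0.5]])                     # (m, d)
y = np.array([0, 2])                           # target class of each sample
W = np.zeros((2, 3))                           # (d, K), initialized to zeros (step 0)
b = np.zeros((1, 3))
eta = 0.1

scores = X @ W + b                             # step 1: class scores, shape (m, K)
probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # step 2: softmax
y_one_hot = np.eye(3)[y]                       # one-hot targets, shape (m, K)
loss = -np.mean(np.sum(y_one_hot * np.log(probs), axis=1))          # step 3: cross-entropy
dW = X.T @ (probs - y_one_hot) / X.shape[0]    # step 4: gradient w.r.t. the weights
db = (probs - y_one_hot).mean(axis=0, keepdims=True)                # gradient w.r.t. the biases
W, b = W - eta * dW, b - eta * db              # step 5: parameter update
print(round(loss, 4))                          # log(3) ~ 1.0986, since initial probs are uniform
```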
###Code
from sklearn.datasets import load_iris
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
np.random.seed(13)
# X W b Y
# [ ] [ ] [ ]
# n_sample [ ] * [ ] n_feature + [ ] -> [ ] n_sample
# [ ] [ ] [ ]
# n_feature n_class n_class n_class
class SoftmaxRegressor:
def __init__(self):
pass
def train(self, X, y_true, n_classes, n_iters=10, learning_rate=0.1):
"""
Trains a multinomial logistic regression model on given set of training data
"""
self.n_samples, n_features = X.shape
self.n_classes = n_classes
self.weights = np.random.rand(n_features, self.n_classes)
self.bias = np.zeros((1, self.n_classes))
all_losses = []
for i in range(n_iters):
scores = self.compute_scores(X)
probs = self.softmax(scores)
y_predict = np.argmax(probs, axis=1)[:, np.newaxis]
y_one_hot = self.one_hot(y_true)
# print("y_one_hot: {}".format(y_one_hot))
loss = self.cross_entropy(y_one_hot, probs)
all_losses.append(loss)
dw = (1 / self.n_samples) * np.dot(X.T, (probs - y_one_hot))
db = (1 / self.n_samples) * np.sum(probs - y_one_hot, axis=0)
self.weights = self.weights - learning_rate * dw
self.bias = self.bias - learning_rate * db
if i % 100 == 0:
print(f'Iteration number: {i}, loss: {np.round(loss, 4)}')
return self.weights, self.bias, all_losses
def predict(self, X):
"""
Predict class labels for samples in X.
Args:
X: numpy array of shape (n_samples, n_features)
Returns:
numpy array of shape (n_samples, 1) with predicted classes
"""
scores = self.compute_scores(X)
probs = self.softmax(scores)
return np.argmax(probs, axis=1)[:, np.newaxis]
def softmax(self, scores):
"""
Tranforms matrix of predicted scores to matrix of probabilities
Args:
scores: numpy array of shape (n_samples, n_classes)
with unnormalized scores
Returns:
softmax: numpy array of shape (n_samples, n_classes)
with probabilities
"""
exp = np.exp(scores)
sum_exp = np.sum(np.exp(scores), axis=1, keepdims=True)
softmax = exp / sum_exp
return softmax
def compute_scores(self, X):
"""
Computes class-scores for samples in X
Args:
X: numpy array of shape (n_samples, n_features)
Returns:
scores: numpy array of shape (n_samples, n_classes)
"""
return np.dot(X, self.weights) + self.bias
def cross_entropy(self, y_true, probs):
loss = - (1 / self.n_samples) * np.sum(y_true * np.log(probs))
return loss
def one_hot(self, y):
"""
Tranforms vector y of labels to one-hot encoded matrix
"""
one_hot = np.zeros((self.n_samples, self.n_classes))
one_hot[np.arange(self.n_samples), y.T] = 1
return one_hot
###Output
_____no_output_____
###Markdown
Dataset
###Code
X, y_true = make_blobs(centers=4, n_samples = 5000)
fig = plt.figure(figsize=(8,6))
plt.scatter(X[:,0], X[:,1], c=y_true)
plt.title("Dataset")
plt.xlabel("First feature")
plt.ylabel("Second feature")
plt.show()
# reshape targets to get column vector with shape (n_samples, 1)
y_true = y_true[:, np.newaxis]
# Split the data into a training and test set
X_train, X_test, y_train, y_test = train_test_split(X, y_true)
print(f'Shape X_train: {X_train.shape}')
print(f'Shape y_train: {y_train.shape}')
print(f'Shape X_test: {X_test.shape}')
print(f'Shape y_test: {y_test.shape}')
###Output
Shape X_train: (3750, 2)
Shape y_train: (3750, 1)
Shape X_test: (1250, 2)
Shape y_test: (1250, 1)
###Markdown
Softmax regression class Initializing and training the model
###Code
regressor = SoftmaxRegressor()
w_trained, b_trained, loss = regressor.train(X_train, y_train, learning_rate=0.1, n_iters=800, n_classes=4)
fig = plt.figure(figsize=(8,6))
plt.plot(np.arange(800), loss)
plt.title("Development of loss during training")
plt.xlabel("Number of iterations")
plt.ylabel("Loss")
plt.show()
###Output
y_one_hot: [[0. 0. 0. 1.]
[0. 1. 0. 0.]
[0. 0. 0. 1.]
...
[0. 0. 1. 0.]
[1. 0. 0. 0.]
[0. 0. 1. 0.]]
Iteration number: 0, loss: 2.7981
Iteration number: 100, loss: 0.2042
###Markdown
Testing the model
###Code
n_test_samples, _ = X_test.shape
y_predict = regressor.predict(X_test)
print(f"Classification accuracy on test set: {(np.sum(y_predict == y_test)/n_test_samples) * 100}%")
###Output
Classification accuracy on test set: 99.03999999999999%
|
tutorials/streamlit_notebooks/CLASSIFICATION_EN_SPAM.ipynb | ###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/CLASSIFICATION_EN_SPAM.ipynb) **Detect Spam messages** 1. Colab Setup
###Code
# Install PySpark and Spark NLP
! pip install -q pyspark==3.1.2 spark-nlp
import pandas as pd
import numpy as np
import json
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
###Output
_____no_output_____
###Markdown
2. Start Spark Session
###Code
spark = sparknlp.start()
###Output
_____no_output_____
###Markdown
3. Select the DL model
###Code
### Select Model
model_name = 'classifierdl_use_spam'
###Output
_____no_output_____
###Markdown
4. Some sample examples
###Code
text_list=[
"""Hiya do u like the hlday pics looked horrible in them so took mo out! Hows the camp Amrca thing? Speak soon Serena:)""",
"""U have a secret admirer who is looking 2 make contact with U-find out who they R*reveal who thinks UR so special-call on 09058094594""",]
###Output
_____no_output_____
###Markdown
5. Define Spark NLP pipeline
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
use = UniversalSentenceEncoder.pretrained(lang="en") \
.setInputCols(["document"])\
.setOutputCol("sentence_embeddings")
document_classifier = ClassifierDLModel.pretrained(model_name)\
.setInputCols(['document', 'sentence_embeddings']).setOutputCol("class")
nlpPipeline = Pipeline(stages=[
documentAssembler,
use,
document_classifier
])
###Output
tfhub_use download started this may take some time.
Approximate size to download 923.7 MB
[OK!]
classifierdl_use_spam download started this may take some time.
Approximate size to download 21.3 MB
[OK!]
###Markdown
6. Run the pipeline
###Code
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
df = spark.createDataFrame(pd.DataFrame({"text":text_list}))
result = pipelineModel.transform(df)
###Output
_____no_output_____
###Markdown
7. Visualize results
###Code
result.select(F.explode(F.arrays_zip('document.result', 'class.result')).alias("cols")) \
.select(F.expr("cols['0']").alias("document"),
F.expr("cols['1']").alias("class")).show(truncate=False)
###Output
+------------------------------------------------------------------------------------------------------------------------------------+-----+
|document |class|
+------------------------------------------------------------------------------------------------------------------------------------+-----+
|Hiya do u like the hlday pics looked horrible in them so took mo out! Hows the camp Amrca thing? Speak soon Serena:) |ham |
|U have a secret admirer who is looking 2 make contact with U-find out who they R*reveal who thinks UR so special-call on 09058094594|ham |
+------------------------------------------------------------------------------------------------------------------------------------+-----+
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/CLASSIFICATION_EN_SPAM.ipynb) **Detect Spam messages** 1. Colab Setup
###Code
# Install java
!apt-get update -qq
!apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
!java -version
# Install pyspark
!pip install --ignore-installed -q pyspark==2.4.4
# Install Sparknlp
!pip install --ignore-installed spark-nlp
import pandas as pd
import numpy as np
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
import json
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
###Output
_____no_output_____
###Markdown
2. Start Spark Session
###Code
spark = sparknlp.start()
###Output
_____no_output_____
###Markdown
3. Select the DL model
###Code
### Select Model
model_name = 'classifierdl_use_spam'
###Output
_____no_output_____
###Markdown
4. Some sample examples
###Code
text_list=[
"""Hiya do u like the hlday pics looked horrible in them so took mo out! Hows the camp Amrca thing? Speak soon Serena:)""",
"""U have a secret admirer who is looking 2 make contact with U-find out who they R*reveal who thinks UR so special-call on 09058094594""",]
###Output
_____no_output_____
###Markdown
5. Define Spark NLP pipeline
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
use = UniversalSentenceEncoder.pretrained(lang="en") \
.setInputCols(["document"])\
.setOutputCol("sentence_embeddings")
document_classifier = ClassifierDLModel.pretrained(model_name)\
.setInputCols(['document', 'sentence_embeddings']).setOutputCol("class")
nlpPipeline = Pipeline(stages=[
documentAssembler,
use,
document_classifier
])
###Output
_____no_output_____
###Markdown
6. Run the pipeline
###Code
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
df = spark.createDataFrame(pd.DataFrame({"text":text_list}))
result = pipelineModel.transform(df)
###Output
_____no_output_____
###Markdown
7. Visualize results
###Code
result.select(F.explode(F.arrays_zip('document.result', 'class.result')).alias("cols")) \
.select(F.expr("cols['0']").alias("document"),
F.expr("cols['1']").alias("class")).show(truncate=False)
###Output
_____no_output_____
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/CLASSIFICATION_EN_SPAM.ipynb) **Detect Spam messages** 0. Colab Setup
###Code
!sudo apt-get install openjdk-8-jdk
!java -version
!pip install --ignore-installed -q pyspark==2.4.4
!pip install spark-nlp
import pandas as pd
import numpy as np
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
import json
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
###Output
_____no_output_____
###Markdown
1. Start Spark Session
###Code
spark = sparknlp.start()
###Output
_____no_output_____
###Markdown
2. Select the DL model
###Code
### Select Model
model_name = 'classifierdl_use_spam'
###Output
_____no_output_____
###Markdown
3. Some sample examples
###Code
text_list=[
"""Hiya do u like the hlday pics looked horrible in them so took mo out! Hows the camp Amrca thing? Speak soon Serena:)""",
"""U have a secret admirer who is looking 2 make contact with U-find out who they R*reveal who thinks UR so special-call on 09058094594""",]
###Output
_____no_output_____
###Markdown
4. Define Spark NLP pipeline
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
use = UniversalSentenceEncoder.pretrained(lang="en") \
.setInputCols(["document"])\
.setOutputCol("sentence_embeddings")
document_classifier = ClassifierDLModel.pretrained(model_name)\
.setInputCols(['document', 'sentence_embeddings']).setOutputCol("class")
nlpPipeline = Pipeline(stages=[
documentAssembler,
use,
document_classifier
])
###Output
_____no_output_____
###Markdown
5. Run the pipeline
###Code
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
df = spark.createDataFrame(pd.DataFrame({"text":text_list}))
result = pipelineModel.transform(df)
###Output
_____no_output_____
###Markdown
6. Visualize results
###Code
result.select(F.explode(F.arrays_zip('document.result', 'class.result')).alias("cols")) \
.select(F.expr("cols['0']").alias("document"),
F.expr("cols['1']").alias("class")).show(truncate=False)
###Output
_____no_output_____
###Markdown
0. Colab Setup
###Code
!sudo apt-get install openjdk-8-jdk
!java -version
!pip install --ignore-installed -q pyspark==2.4.4
!pip install spark-nlp
import pandas as pd
import numpy as np
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
import json
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
###Output
_____no_output_____
###Markdown
1. Start Spark Session
###Code
spark = sparknlp.start()
###Output
_____no_output_____
###Markdown
2. Select the DL model
###Code
### Select Model
model_name = 'classifierdl_use_spam'
###Output
_____no_output_____
###Markdown
3. Some sample examples
###Code
#text_list=[
#"""Hiya do u like the hlday pics looked horrible in them so took mo out! Hows the camp Amrca thing? Speak soon Serena:)""",
#"""U have a secret admirer who is looking 2 make contact with U-find out who they R*reveal who thinks UR so special-call on 09058094594""",]
text_list=[
"""Are you ready for the tea party????? It's gonna be wild)""",
"""URGENT Reply to this message for GUARANTEED FREE TEA""",]
###Output
_____no_output_____
###Markdown
4. Define Spark NLP pipeline
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
use = UniversalSentenceEncoder.pretrained(lang="en") \
.setInputCols(["document"])\
.setOutputCol("sentence_embeddings")
document_classifier = ClassifierDLModel.pretrained(model_name)\
.setInputCols(['document', 'sentence_embeddings']).setOutputCol("class")
nlpPipeline = Pipeline(stages=[
documentAssembler,
use,
document_classifier
])
###Output
tfhub_use download started this may take some time.
Approximate size to download 923.7 MB
[OK!]
classifierdl_use_spam download started this may take some time.
Approximate size to download 21.5 MB
[OK!]
###Markdown
5. Run the pipeline
###Code
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
df = spark.createDataFrame(pd.DataFrame({"text":text_list}))
result = pipelineModel.transform(df)
###Output
_____no_output_____
###Markdown
6. Visualize results
###Code
result.select(F.explode(F.arrays_zip('document.result', 'class.result')).alias("cols")) \
.select(F.expr("cols['0']").alias("email doc"),
F.expr("cols['1']").alias("Spam?")).show(truncate=False)
###Output
+--------------------------------------------------------+-----+
|email doc |Spam?|
+--------------------------------------------------------+-----+
|Are you ready for the tea party????? It's gonna be wild)|ham |
|URGENT Reply to this message for GUARANTEED FREE TEA |ham |
+--------------------------------------------------------+-----+
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/CLASSIFICATION_EN_SPAM.ipynb) **Detect Spam messages** 1. Colab Setup
###Code
# Install java
!apt-get update -qq
!apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
!java -version
# Install pyspark
!pip install --ignore-installed -q pyspark==2.4.4
# Install Sparknlp
!pip install --ignore-installed spark-nlp
import pandas as pd
import numpy as np
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
import json
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
###Output
_____no_output_____
###Markdown
2. Start Spark Session
###Code
spark = sparknlp.start()
###Output
_____no_output_____
###Markdown
3. Select the DL model
###Code
### Select Model
model_name = 'classifierdl_use_spam'
###Output
_____no_output_____
###Markdown
4. Some sample examples
###Code
text_list=[
"""Hiya do u like the hlday pics looked horrible in them so took mo out! Hows the camp Amrca thing? Speak soon Serena:)""",
"""U have a secret admirer who is looking 2 make contact with U-find out who they R*reveal who thinks UR so special-call on 09058094594""",]
###Output
_____no_output_____
###Markdown
5. Define Spark NLP pipeline
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
use = UniversalSentenceEncoder.pretrained(lang="en") \
.setInputCols(["document"])\
.setOutputCol("sentence_embeddings")
document_classifier = ClassifierDLModel.pretrained(model_name)\
.setInputCols(['document', 'sentence_embeddings']).setOutputCol("class")
nlpPipeline = Pipeline(stages=[
documentAssembler,
use,
document_classifier
])
###Output
tfhub_use download started this may take some time.
Approximate size to download 923.7 MB
[OK!]
classifierdl_use_spam download started this may take some time.
Approximate size to download 21.5 MB
[OK!]
###Markdown
6. Run the pipeline
###Code
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
df = spark.createDataFrame(pd.DataFrame({"text":text_list}))
result = pipelineModel.transform(df)
###Output
_____no_output_____
###Markdown
7. Visualize results
###Code
result.select(F.explode(F.arrays_zip('document.result', 'class.result')).alias("cols")) \
.select(F.expr("cols['0']").alias("document"),
F.expr("cols['1']").alias("class")).show(truncate=False)
###Output
+------------------------------------------------------------------------------------------------------------------------------------+-----+
|document |class|
+------------------------------------------------------------------------------------------------------------------------------------+-----+
|Hiya do u like the hlday pics looked horrible in them so took mo out! Hows the camp Amrca thing? Speak soon Serena:) |ham |
|U have a secret admirer who is looking 2 make contact with U-find out who they R*reveal who thinks UR so special-call on 09058094594|spam |
+------------------------------------------------------------------------------------------------------------------------------------+-----+
###Markdown
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/CLASSIFICATION_EN_SPAM.ipynb) **Detect Spam messages** 1. Colab Setup
###Code
!wget http://setup.johnsnowlabs.com/colab.sh -O - | bash
# !bash colab.sh
# -p is for pyspark
# -s is for spark-nlp
# !bash colab.sh -p 3.1.1 -s 3.0.1
# by default they are set to the latest
import pandas as pd
import numpy as np
import json
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
###Output
_____no_output_____
###Markdown
2. Start Spark Session
###Code
spark = sparknlp.start()
###Output
_____no_output_____
###Markdown
3. Select the DL model
###Code
### Select Model
model_name = 'classifierdl_use_spam'
###Output
_____no_output_____
###Markdown
4. Some sample examples
###Code
text_list=[
"""Hiya do u like the hlday pics looked horrible in them so took mo out! Hows the camp Amrca thing? Speak soon Serena:)""",
"""U have a secret admirer who is looking 2 make contact with U-find out who they R*reveal who thinks UR so special-call on 09058094594""",]
###Output
_____no_output_____
###Markdown
5. Define Spark NLP pipeline
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
use = UniversalSentenceEncoder.pretrained(lang="en") \
.setInputCols(["document"])\
.setOutputCol("sentence_embeddings")
document_classifier = ClassifierDLModel.pretrained(model_name)\
.setInputCols(['document', 'sentence_embeddings']).setOutputCol("class")
nlpPipeline = Pipeline(stages=[
documentAssembler,
use,
document_classifier
])
###Output
tfhub_use download started this may take some time.
Approximate size to download 923.7 MB
[OK!]
classifierdl_use_spam download started this may take some time.
Approximate size to download 21.3 MB
[OK!]
###Markdown
6. Run the pipeline
###Code
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
df = spark.createDataFrame(pd.DataFrame({"text":text_list}))
result = pipelineModel.transform(df)
###Output
_____no_output_____
###Markdown
7. Visualize results
###Code
result.select(F.explode(F.arrays_zip('document.result', 'class.result')).alias("cols")) \
.select(F.expr("cols['0']").alias("document"),
F.expr("cols['1']").alias("class")).show(truncate=False)
###Output
+------------------------------------------------------------------------------------------------------------------------------------+-----+
|document |class|
+------------------------------------------------------------------------------------------------------------------------------------+-----+
|Hiya do u like the hlday pics looked horrible in them so took mo out! Hows the camp Amrca thing? Speak soon Serena:) |ham |
|U have a secret admirer who is looking 2 make contact with U-find out who they R*reveal who thinks UR so special-call on 09058094594|ham |
+------------------------------------------------------------------------------------------------------------------------------------+-----+
|
Vehicle_Detection.ipynb | ###Markdown
Vehicle Detection Project The goals / steps of this project are the following:* Perform a Histogram of Oriented Gradients (HOG) feature extraction on a labeled training set of images and train a Linear SVM classifier* Optionally, you can also apply a color transform and append binned color features, as well as histograms of color, to your HOG feature vector.* Note: for those first two steps don't forget to normalize your features and randomize a selection for training and testing.* Implement a sliding-window technique and use your trained classifier to search for vehicles in images.* Run your pipeline on a video stream (start with the test_video.mp4 and later implement on full project_video.mp4) and create a heat map of recurring detections frame by frame to reject outliers and follow detected vehicles.* Estimate a bounding box for vehicles detected. Define helper functions
###Code
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import cv2
from skimage.feature import hog
import pickle
def convert_color(img, conv='YCrCb'):
# Define a function to convert color space
if conv == 'YCrCb':
return cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
if conv == 'LUV':
return cv2.cvtColor(img, cv2.COLOR_BGR2LUV)
if conv == 'YUV':
return cv2.cvtColor(img, cv2.COLOR_BGR2YUV)
if conv == 'HLS':
return cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
if conv == 'HSV':
return cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
if conv == 'RGB':
return cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
def get_hog_features(img, orient, pix_per_cell, cell_per_block, vis=False, feature_vector=True):
# Define a function to return HOG features and visualization
# If feature_vector is True, a 1D (flattened) array is returned.
if vis == True:
features, hog_image = hog(img, orientations=orient,
pixels_per_cell=(pix_per_cell, pix_per_cell),
block_norm= 'L2-Hys',
cells_per_block=(cell_per_block, cell_per_block),
transform_sqrt=False,
visualize=vis, feature_vector=feature_vector)
return features, hog_image
else:
features = hog(img, orientations=orient,
pixels_per_cell=(pix_per_cell, pix_per_cell),
block_norm= 'L2-Hys',
cells_per_block=(cell_per_block, cell_per_block),
transform_sqrt=False,
visualize=vis, feature_vector=feature_vector)
return features
def bin_spatial(img, size=(16,16)):
# Define a function to compute binned color features
return cv2.resize(img, size).ravel()
def color_hist(img, nbins=32, bins_range=(0, 256)):
# Define a function to compute color histogram features
channel1 = np.histogram(img[:,:,0], bins=nbins, range=bins_range)
channel2 = np.histogram(img[:,:,1], bins=nbins, range=bins_range)
channel3 = np.histogram(img[:,:,2], bins=nbins, range=bins_range)
hist_features = np.concatenate((channel1[0], channel2[0], channel3[0]))
return hist_features
def extract_features(imgs, color_space='BGR', spatial_size=(16, 16),
hist_bins=128, orient=9, pix_per_cell=8,
cell_per_block=2, hog_channel=0,
spatial_feat=True, hist_feat=True, hog_feat=True):
features = []
for img in imgs:
img_features = []
image = cv2.imread(img)
if color_space != 'BGR':
feature_image = convert_color(image, color_space)
else:
feature_image = np.copy(image)
if spatial_feat:
spatial_feature = bin_spatial(feature_image, spatial_size)
img_features.append(spatial_feature)
if hist_feat:
hist_feature = color_hist(feature_image, hist_bins)
img_features.append(hist_feature)
if hog_feat:
if hog_channel == 'ALL':
hog_ch1 = get_hog_features(feature_image[:,:,0], orient,
pix_per_cell, cell_per_block, vis=False, feature_vector=True)
hog_ch2 = get_hog_features(feature_image[:,:,1], orient,
pix_per_cell, cell_per_block, vis=False, feature_vector=True)
hog_ch3 = get_hog_features(feature_image[:,:,2], orient,
pix_per_cell, cell_per_block, vis=False, feature_vector=True)
hog_feature = np.concatenate((hog_ch1, hog_ch2, hog_ch3))
else:
hog_feature = get_hog_features(feature_image[:,:,hog_channel], orient,
pix_per_cell, cell_per_block, vis=False, feature_vector=True)
img_features.append(hog_feature)
features.append(np.concatenate(img_features))
return features
def draw_boxes(img, bboxes, color=(0, 0, 255), thick=6):
# Make a copy of the image
imcopy = np.copy(img)
for bbox in bboxes:
top_left = bbox[0]
bottom_right = bbox[1]
cv2.rectangle(imcopy, (top_left[0], top_left[1]), (bottom_right[0], bottom_right[1]), color, thick)
return imcopy
###Output
_____no_output_____
###Markdown
Extract data
###Code
import glob
notcars = list(glob.glob('data/non-vehicle/*.png'))
cars = list(glob.glob('data/vehicle/**/*.png'))
#print(len(notcars))
#print(len(cars))
###Output
_____no_output_____
###Markdown
train a Linear SVM classifier
###Code
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
orient = 12
pix_per_cell = 8
cell_per_block = 2
spatial_size = (16, 16)
hist_bins = 128
color_space = 'YCrCb'
# Extract features from image dataset
car_features = extract_features(cars, color_space=color_space, spatial_size=spatial_size,
hist_bins=hist_bins, orient=orient, pix_per_cell=pix_per_cell,
cell_per_block=cell_per_block, hog_channel='ALL',
spatial_feat=True, hist_feat=True, hog_feat=True)
noncar_features = extract_features(notcars, color_space=color_space, spatial_size=spatial_size,
hist_bins=hist_bins, orient=orient, pix_per_cell=pix_per_cell,
cell_per_block=cell_per_block, hog_channel='ALL',
spatial_feat=True, hist_feat=True, hog_feat=True)
# Create an array stack of feature vectors
# NOTE: StandardScaler() expects np.float64
X = np.vstack((car_features, noncar_features)).astype(np.float64)
# Define the labels vector
y = np.hstack((np.ones(len(car_features)), np.zeros(len(noncar_features))))
# Split up data into randomized training and test sets
# It's important to do the scaling after splitting the data, otherwise you are
# allowing the scaler to peer into your test data!
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Normalize data
# Fit a per-column scaler
scaler = StandardScaler()
X_scaler = scaler.fit(X_train)
# Apply the scaler to X
X_train = X_scaler.transform(X_train)
X_test = X_scaler.transform(X_test)
# Use a linear SVC
svc = LinearSVC()
svc.fit(X_train, y_train)
print('Test Accuracy of SVC = ', svc.score(X_test, y_test))
data_pickle = {
"svc" : svc,
"scaler" : X_scaler,
"orient" : orient,
"pix_per_cell" : pix_per_cell,
"cell_per_block" : cell_per_block,
"spatial_size" : spatial_size,
"hist_bins" : hist_bins,
"color_space" : color_space,
}
filename = "svc_pickle.p"
with open(filename, 'wb') as f:
pickle.dump(data_pickle, f)
print("data saved to svc_pickle.p")
###Output
Test Accuracy of SVC = 0.9938063063063063
data saved to svc_pickle.p
###Markdown
Implement a sliding-window technique
###Code
def find_cars(img, ystart, ystop, color_space, scale, svc, X_scaler, orient,
pix_per_cell, cell_per_block, spatial_size, hist_bins):
draw_img = np.copy(img)
# img = img.astype(np.float32) / 255
img_tosearch = img[ystart:ystop, :, :]
ctrans_tosearch = convert_color(img_tosearch, conv=color_space)
if scale != 1:
image_shape = ctrans_tosearch.shape
ctrans_tosearch = cv2.resize(ctrans_tosearch, (np.int(image_shape[1]/scale), np.int(image_shape[0]/scale)))
ch1 = ctrans_tosearch[:,:,0]
ch2 = ctrans_tosearch[:,:,1]
ch3 = ctrans_tosearch[:,:,2]
# Compute individual channel HOG features for the entire image
# hog dimension = (nyblocks x nxblocks x cell_per_block x cell_per_block x orient)
hog1 = get_hog_features(ch1, orient, pix_per_cell, cell_per_block, vis=False, feature_vector=False)
hog2 = get_hog_features(ch2, orient, pix_per_cell, cell_per_block, vis=False, feature_vector=False)
hog3 = get_hog_features(ch3, orient, pix_per_cell, cell_per_block, vis=False, feature_vector=False)
x_nblocks = (ch1.shape[1] // pix_per_cell) - cell_per_block + 1
y_nblocks = (ch1.shape[0] // pix_per_cell) - cell_per_block + 1
window_size = 64 # pixels
block_per_window = (window_size // pix_per_cell) - cell_per_block + 1
'''
if scale > 1:
cells_per_step = 2
else:
cells_per_step = 1
'''
cells_per_step = 2
nx_step = 1 + (x_nblocks - block_per_window) // cells_per_step
ny_step = 1 + (y_nblocks - block_per_window) // cells_per_step
car_windows = []
for yb in range(ny_step):
for xb in range(nx_step):
xpos = xb * cells_per_step
ypos = yb * cells_per_step
hog_feature1 = hog1[ypos:ypos+block_per_window, xpos:xpos+block_per_window].ravel()
hog_feature2 = hog2[ypos:ypos+block_per_window, xpos:xpos+block_per_window].ravel()
hog_feature3 = hog3[ypos:ypos+block_per_window, xpos:xpos+block_per_window].ravel()
hog_feature = np.concatenate((hog_feature1, hog_feature2, hog_feature3))
# Extract the image patch
x_top_left = xpos * pix_per_cell # convert cell to pixel
y_top_left = ypos * pix_per_cell
subimg = cv2.resize(ctrans_tosearch[y_top_left:y_top_left+window_size, x_top_left:x_top_left+window_size], (64,64))
# Get color feature
spatial_feature = bin_spatial(subimg, size=spatial_size)
hist_feature = color_hist(subimg, nbins=hist_bins)
# concatenate all features
features = np.hstack((spatial_feature, hist_feature, hog_feature)).reshape(1, -1)
# Scale features and make a prediction
test_features = X_scaler.transform(features)
test_prediction = svc.predict(test_features)
if test_prediction == 1:
xbox_left = np.int(x_top_left * scale)
ybox_left = np.int(y_top_left * scale)
window = np.int(window_size * scale)
car_windows.append(((xbox_left, ybox_left+ystart), (xbox_left+window, ybox_left+ystart+window)))
cv2.rectangle(draw_img, (xbox_left, ybox_left+ystart), (xbox_left+window, ybox_left+ystart+window), (0,0,255), 6)
# print(car_windows)
return car_windows
###Output
_____no_output_____
###Markdown
Test the pipeline on image
###Code
# load a pre-trained svc model from a serialized (pickle) file
dist_pickle = pickle.load( open("svc_pickle.p", "rb" ) )
# get attributes of our svc object
svc = dist_pickle["svc"]
X_scaler = dist_pickle["scaler"]
orient = dist_pickle["orient"]
pix_per_cell = dist_pickle["pix_per_cell"]
cell_per_block = dist_pickle["cell_per_block"]
spatial_size = dist_pickle["spatial_size"]
hist_bins = dist_pickle["hist_bins"]
color_space = dist_pickle["color_space"]
ystart = 400
ystop = 656
scale = 1.5
# ystart, ystop, scale, overlap, color
searches = [
(380, 500, 1.0, (0, 0, 255)), # 64x64
(400, 550, 1.6, (0, 255, 0)), # 101x101
(400, 680, 2.5, (255, 0, 0)), # 161x161
(400, 680, 3.8, (255, 255, 0)), # 256x256
]
bbox_list = []
filename = 'test_images/scene00006.jpg'
img = cv2.imread(filename)
draw_img = np.copy(img)
for ystart, ystop, scale, color in searches:
bboxes = find_cars(img, ystart, ystop, color_space, scale, svc, X_scaler, orient,
pix_per_cell, cell_per_block, spatial_size, hist_bins)
if len(bboxes) > 0:
bbox_list.append(bboxes)
draw_img = draw_boxes(draw_img, bboxes, color=color, thick=3)
plt.figure(figsize=(12, 6))
plt.imshow(cv2.cvtColor(draw_img, cv2.COLOR_BGR2RGB))
#plt.savefig('./output_images/result4.jpg')
plt.show()
###Output
_____no_output_____
###Markdown
Build a heat-map and remove false positives
###Code
from scipy.ndimage.measurements import label
def add_heat(heatmap, bbox_list):
for search in bbox_list:
for box in search:
# Add += 1 for all pixels inside each bbox
# Assuming each "box" takes the form ((x1, y1), (x2, y2))
heatmap[box[0][1]:box[1][1], box[0][0]:box[1][0]] += 1
return heatmap
def apply_threshold(heatmap, threshold):
heatmap[heatmap <= threshold] = 0
return heatmap
def draw_labeled_bboxes(img, labels):
labeled_array, num_features = labels
for car_number in range(1, num_features+1):
# Find pixels with each car_number label value
# .nonzero(): Return the indices of the elements that are non-zero
nonzero = (labeled_array == car_number).nonzero()
# Identify x and y values of those pixels
y = np.array(nonzero[0])
x = np.array(nonzero[1])
bbox = ((np.min(x), np.min(y)), (np.max(x), np.max(y)))
cv2.rectangle(img, bbox[0], bbox[1], (255,0,0), 6)
return img
# ystart, ystop, scale, overlap, color
searches = [
(380, 500, 1.0, (0, 0, 255)), # 64x64
(400, 550, 1.6, (0, 255, 0)), # 101x101
(400, 680, 2.5, (255, 0, 0)), # 161x161
(400, 680, 3.8, (255, 255, 0)), # 256x256
]
filename = 'test_images/scene00006.jpg'
img = cv2.imread(filename)
total_boxes = []
for ystart, ystop, scale, color in searches:
bboxes = find_cars(img, ystart, ystop, color_space, scale, svc, X_scaler, orient,
pix_per_cell, cell_per_block, spatial_size, hist_bins)
total_boxes.append(bboxes)
heat = np.zeros_like(img[:,:,0]).astype(np.float)
# Add heat to each box in box list
heat = add_heat(heat, total_boxes)
# Apply threshold to help remove false positives
heat = apply_threshold(heat,1)
# Visualize the heatmap when displaying
# np.clip : Clip (limit) the values in an array.
# Given an interval, values outside the interval are clipped to the interval edges.
heatmap = np.clip(heat, 0, 255)
# Find final boxes from heatmap using label function
labels = label(heatmap)
draw = draw_labeled_bboxes(img, labels)
fig = plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.imshow(cv2.cvtColor(draw, cv2.COLOR_BGR2RGB))
plt.title('Car Positions')
plt.subplot(122)
plt.imshow(heatmap, cmap='hot')
plt.title('Heat Map')
fig.tight_layout()
plt.show()
from collections import deque
frames = deque([], 3)
def vehicle_detection_pipeline(img):
# ystart, ystop, scale, overlap, color
searches = [
(380, 480, 1.0, (0, 0, 255)), # 64x64
(390, 550, 1.6, (0, 255, 0)), # 101x101
(400, 610, 2.5, (255, 0, 0)), # 161x161
(400, 680, 3.8, (255, 255, 0)), # 256x256
]
total_boxes = []
img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
for ystart, ystop, scale, color in searches:
bboxes = find_cars(img, ystart, ystop, color_space, scale, svc, X_scaler, orient,
pix_per_cell, cell_per_block, spatial_size, hist_bins)
total_boxes.append(bboxes)
if len(frames) == 0:
all_frames_heatmap = np.zeros_like(img[:,:,0]).astype(np.float)
else:
all_frames_heatmap = frames[-1]
current__frame_heat = np.zeros_like(img[:,:,0]).astype(np.float)
if len(total_boxes) > 0:
current_heatmap = add_heat(current__frame_heat, total_boxes)
if len(frames) == 3:
all_frames_heatmap -= frames[0] * 0.3**5
all_frames_heatmap = all_frames_heatmap*0.8 + current__frame_heat
frames.append(all_frames_heatmap)
# Apply threshold to help remove false positives
heat = apply_threshold(all_frames_heatmap, len(frames))
heatmap = np.clip(heat, 0, 255)
# Find final boxes from heatmap using label function
labels = label(heatmap)
draw = draw_labeled_bboxes(img, labels)
# convert BGR to RGB image
draw = cv2.cvtColor(draw, cv2.COLOR_BGR2RGB)
return draw
image = mpimg.imread('test_images/scene00003.jpg')
plt.imshow(vehicle_detection_pipeline(image))
plt.savefig('./output_images/final_output.jpg')
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
# run image pipeline with video
output = 'test_video_result.mp4'
clip1 = VideoFileClip("test_video.mp4")
white_clip = clip1.fl_image(vehicle_detection_pipeline) #NOTE: this function expects color images!!
%time white_clip.write_videofile(output, audio=False)
HTML("""
<video width="1280" height="720" controls>
<source src="{0}">
</video>
""".format(output))
# run image pipeline with video
frames = deque([], 4)
output = 'p_video_result.mp4'
clip1 = VideoFileClip("project_video.mp4")
white_clip = clip1.fl_image(vehicle_detection_pipeline) #NOTE: this function expects color images!!
%time white_clip.write_videofile(output, audio=False)
HTML("""
<video width="1280" height="720" controls>
<source src="{0}">
</video>
""".format(output))
###Output
_____no_output_____
###Markdown
**Vehicle Detection Project** The goals / steps of this project are the following:* Perform a Histogram of Oriented Gradients (HOG) feature extraction on a labeled training set of images and train a Linear SVM classifier* Optionally, you can also apply a color transform and append binned color features, as well as histograms of color, to your HOG feature vector. * Note: for those first two steps don't forget to normalize your features and randomize a selection for training and testing.* Implement a sliding-window technique and use your trained classifier to search for vehicles in images.* Run your pipeline on a video stream (start with the test_video.mp4 and later implement on full project_video.mp4) and create a heat map of recurring detections frame by frame to reject outliers and follow detected vehicles.* Estimate a bounding box for vehicles detected.
###Code
import numpy as np
import cv2
import matplotlib.pyplot as plt
import matplotlib.image as img
import glob
#utility function to fetch the image path+names for cars and non cars respectively.
def get_image_names():
non_vehicles1=np.array(glob.glob('TrainingData/non-vehicles/non-vehicles/Extras/ex*.png'))
non_vehicles2=np.array(glob.glob('TrainingData/non-vehicles/non-vehicles/GTI/im*.png'))
non_vehicles=np.append(non_vehicles1,non_vehicles2)
vehicles=np.array(glob.glob('TrainingData/vehicles/vehicles/*/*.png'))
return non_vehicles,vehicles
###Output
_____no_output_____
###Markdown
Visualizing Training Data. So in the training set we have *8968 Non Vehicle Images* and *8792 Vehicle Images*
###Code
data=get_image_names()
print('non_vehicle images=',len(data[0]),'and vehicle images=',len(data[1]))
def load_images():
non_vehicle,vehicle=get_image_names()
cars=[]
non_cars=[]
for name in vehicle:
cars.append(cv2.imread(name))
for name in non_vehicle:
non_cars.append(cv2.imread(name))
return cars,non_cars
###Output
_____no_output_____
###Markdown
Training Data Shape. Each training image has 64x64x3 shape.
###Code
cars,non_cars=load_images()
print(cars[0].shape)
###Output
(64, 64, 3)
###Markdown
Visualizing Images. Below is an example of a Car and a Non Car Image.
###Code
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10,5))
ax1.imshow(cv2.cvtColor(cars[0],cv2.COLOR_BGR2RGB))
ax1.set_title('Car Image', fontsize=30)
ax2.imshow(cv2.cvtColor(non_cars[0],cv2.COLOR_BGR2RGB))
ax2.set_title('Non car Image', fontsize=30)
###Output
_____no_output_____
###Markdown
HOG Features. To detect the vehicles I used the Histogram of Oriented Gradients (HOG) as one of the features. I took HOG on the 'YCrCb' color space and, to be more specific, I used the 'Cr' color channel to extract the HOG features. I tried different color spaces and color channels while going through the classroom quizzes, and after trying different combinations I found that the classifier accuracy is best when I use the 'Cr' channel for the HOG features. The function below takes an image and a color space name as input; orientation and the other parameters are optional. During training I used ``pix_per_cell=16`` ``orient=9`` ``Color_space=YCrCb`` ``cells_per_block=2`` and ``Channel=1``. I used this configuration because the accuracy of the classifier is above 95% when it is fed data extracted with HOG under this configuration. With this configuration the HOG feature vector length is *324*, as shown in the output below.
###Code
from skimage.feature import hog
def get_hog_features(image,cspace, orient=9, pix_per_cell=8, cell_per_block=2, vis=True,
feature_vec=True,channel=0):
if cspace != 'BGR':
if cspace == 'HSV':
feature_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
elif cspace == 'LUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_BGR2LUV)
elif cspace == 'HLS':
feature_image = cv2.cvtColor(image, cv2.COLOR_BGR2HLS)
elif cspace == 'YUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_BGR2YUV)
elif cspace == 'YCrCb':
feature_image = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)
elif cspace == 'RGB':
feature_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
else: feature_image = np.copy(image)
return_list = hog(feature_image[:,:,channel], orientations=orient, pixels_per_cell=(pix_per_cell, pix_per_cell),
cells_per_block=(cell_per_block, cell_per_block),
block_norm= 'L2-Hys', transform_sqrt=False,
visualise= vis, feature_vector= feature_vec)
# name returns explicitly
hog_features = return_list[0]
if vis:
hog_image = return_list[1]
return hog_features, hog_image
else:
return hog_features
###Output
_____no_output_____
###Markdown
Output of HOG. Below is an example of the HOG output.
###Code
hog_features,hog_image=get_hog_features(cars[1],'YCrCb',channel=1,pix_per_cell=16)
print('shape of hog features ',hog_features.shape)
plt.imshow(hog_image,cmap='gray')
###Output
shape of hog features (324,)
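###Markdown
A quick sanity check on this number (a sketch, assuming a 64x64 input with ``pix_per_cell=16``, ``cell_per_block=2`` and ``orient=9``): the image has 4 cells per dimension, giving 4 - 2 + 1 = 3 blocks per dimension, so the flattened HOG vector has 3 x 3 x 2 x 2 x 9 = 324 values, matching the output above.
###Code
# Sketch: expected HOG feature length for the configuration used above
ppc, cpb, n_orient = 16, 2, 9            # pix_per_cell, cell_per_block, orientations
cells_per_dim = 64 // ppc                # 4 cells per dimension for a 64x64 image
blocks_per_dim = cells_per_dim - cpb + 1 # 3 blocks per dimension
print(blocks_per_dim**2 * cpb**2 * n_orient)   # 324
###Output
_____no_output_____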
###Markdown
Spatial Binning. I used spatial binning to extract more features from the image. In spatial binning we take the raw pixel values of the image. The basic idea is that even if we decrease the size of an image within a certain range, it still retains most of its information. So here the 64x64 input image is resized to 16x16 and then used as a feature vector for the classifier along with the HOG feature vector. I used the ``ravel()`` function to convert the 2D array to a vector. I used the 'YUV' color space for spatial binning; the function below takes an image as input and converts it to the given color space. After a few observations it was clear that 'YUV' gives good results in our case, as can be seen in the sample outputs below:
###Code
def bin_spatial(image, cspace='BGR', size=(16, 16)):
# Convert image to new color space (if specified)
if cspace != 'BGR':
if cspace == 'HSV':
feature_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
elif cspace == 'LUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_BGR2LUV)
elif cspace == 'HLS':
feature_image = cv2.cvtColor(image, cv2.COLOR_BGR2HLS)
elif cspace == 'YUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_BGR2YUV)
elif cspace == 'YCrCb':
feature_image = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)
elif cspace == 'RGB':
feature_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
else: feature_image = np.copy(image)
# Use cv2.resize().ravel() to create the feature vector
    small_img = cv2.resize(feature_image, size)
# Return the feature vector
features=small_img.ravel()
return features
###Output
_____no_output_____
###Markdown
Spatial Binning output for Car Images
###Code
plt.plot(bin_spatial(cars[0],'YUV'))
###Output
_____no_output_____
###Markdown
Spatial Binning output for Non Car Images
###Code
plt.plot(bin_spatial(non_cars[0],'YUV'))
###Output
_____no_output_____
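###Markdown
As a rough length check (a sketch; ``bin_spatial`` is the function defined above and the dummy array only stands in for a 64x64 BGR training image): with the default ``size=(16, 16)`` the spatial feature contributes 16 x 16 x 3 = 768 values.
###Code
# Sketch: length of the spatial-binning feature vector for a 64x64, 3-channel image
import numpy as np
dummy = np.zeros((64, 64, 3), dtype=np.uint8)   # stand-in for a training image
print(len(bin_spatial(dummy, 'YUV')))           # 16 * 16 * 3 = 768
###Output
_____no_output_____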
###Markdown
Color Histogram. I also used color histograms to fetch features out of an image. As the name implies, we take an image and, based on the given color channel and bin size specifications, we calculate the histogram for each given channel and bin size and then append them together to form a feature vector. I used the HLS color space and the 'S' color channel for the color histogram feature vector. After doing some experimentation I found that saturation can be a reliable feature to identify vehicles. I used ``Number of bins=32`` ``color space=HLS`` and ``bins range=0-256``. Below is a sample output of the color histogram for a given image and color space (HLS in our case).
###Code
def color_hist(image, nbins=32, channel=None,bins_range=(0, 256),cspace='BGR',v=False):
# Compute the histogram of the RGB channels separately
if cspace != 'BGR':
if cspace == 'HSV':
feature_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
elif cspace == 'LUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_BGR2LUV)
elif cspace == 'HLS':
feature_image = cv2.cvtColor(image, cv2.COLOR_BGR2HLS)
elif cspace == 'YUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_BGR2YUV)
elif cspace == 'YCrCb':
feature_image = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)
elif cspace == 'RGB':
feature_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
else: feature_image = np.copy(image)
if(channel==None):
first_hist = np.histogram(feature_image[:,:,0],bins=nbins,range=bins_range)
second_hist = np.histogram(feature_image[:,:,1],bins=nbins,range=bins_range)
third_hist = np.histogram(feature_image[:,:,2],bins=nbins,range=bins_range)
bin_edges=first_hist[1]
bin_centers = (bin_edges[1:]+bin_edges[0:len(bin_edges)-1])/2
if(v):
return first_hist, second_hist, third_hist,bin_centers
else:
hist_features = np.concatenate((first_hist[0], second_hist[0], third_hist[0]))
# Return the individual histograms, bin_centers and feature vector
return hist_features
else:
first_hist = np.histogram(feature_image[:,:,channel],bins=nbins,range=bins_range)
bin_edges=first_hist[1]
# Generating bin centers
bin_centers = (bin_edges[1:]+bin_edges[0:len(bin_edges)-1])/2
# Concatenate the histograms into a single feature vector
# hist_features = np.concatenate((rhist[0],ghist[0],bhist[0]))
# Return the individual histograms, bin_centers and feature vector
if(v):
return first_hist,bin_centers
return first_hist[0]
###Output
_____no_output_____
###Markdown
Output of Color Histogram function
###Code
histogram=color_hist(cars[0],cspace='HLS',v=True)
fig = plt.figure(figsize=(12,3))
plt.subplot(131)
plt.bar(histogram[3], histogram[0][0])
plt.xlim(0, 256)
plt.title('H Histogram')
plt.subplot(132)
plt.bar(histogram[3], histogram[1][0])
plt.xlim(0, 256)
plt.title('L Histogram')
plt.subplot(133)
plt.bar(histogram[3], histogram[2][0])
plt.xlim(0, 256)
plt.title('S Histogram')
fig.tight_layout()
histogram=color_hist(cars[0],cspace='YUV',channel=1,v=True)
fig = plt.figure(figsize=(24,6))
plt.subplot(131)
plt.bar(histogram[1], histogram[0][0])
plt.xlim(0, 256)
plt.title('U Histogram')
###Output
_____no_output_____
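###Markdown
Another quick check (a sketch using ``color_hist`` as defined above): with ``nbins=32`` the single-channel histogram used during training contributes 32 values, while the three-channel variant would contribute 96.
###Code
# Sketch: histogram feature length for one channel vs. all three channels
single_channel = color_hist(cars[0], cspace='HLS', channel=2)   # S channel only
all_channels = color_hist(cars[0], cspace='HLS')                # H, L and S concatenated
print(len(single_channel), len(all_channels))                   # 32 96
###Output
_____no_output_____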
###Markdown
Classifier. I used a Support Vector Machine as my classifier. I chose it because it has a simple implementation and its training time is considerably small compared with neural networks and other classifiers. Initially I was using a 'linear' kernel, but even after achieving 96% test accuracy with the linear kernel there were too many false positive detections. I then considered either increasing the size of the feature vector or using the Radial Basis Function ('rbf') kernel. I settled on the 'rbf' kernel since it gave 99% test accuracy and the number of false positive detections also decreased drastically.
###Code
from sklearn.svm import LinearSVC
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
def train_model(X_train,y_train):
svc=SVC(kernel='rbf')
svc.fit(X_train,y_train)
return svc
###Output
_____no_output_____
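###Markdown
If you want to compare kernels or tune the regularization more systematically, a small grid search is one option (a sketch, not part of the original training code; ``X_train``/``y_train`` come from ``train_and_save()`` below and fitting several SVMs on the full feature set can be slow).
###Code
# Sketch: optional hyper-parameter search over kernel and C for the SVC
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
param_grid = {'kernel': ['linear', 'rbf'], 'C': [1, 10]}
search = GridSearchCV(SVC(), param_grid, cv=3)
# search.fit(X_train, y_train)                     # uncomment to run; this is slow
# print(search.best_params_, search.best_score_)
###Output
_____no_output_____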
###Markdown
Extract Features. The function ``extract_features()`` is used to fetch the feature vector from each image during the training phase of the classifier. It simply extracts the feature vector for each image and dumps these features into a pickle file; later we use these features to train our classifier.
###Code
import pickle
import time
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
def extract_features():
cars,non_cars=load_images()
cars_features=[]
non_cars_features=[]
for car in cars:
color_hist_features1=color_hist(car,cspace='HLS',channel=2)
#color_hist_features2=color_hist(car,cspace='YUV',channel=1)
hog_features=get_hog_features(car,'YCrCb',channel=1,pix_per_cell=16)[0]
spatial_features=bin_spatial(car,'YUV')
temp=np.array([])
temp=np.append(temp,color_hist_features1)
#temp=np.append(temp,color_hist_features2)
temp=np.append(temp,hog_features)
temp=np.append(temp,spatial_features)
cars_features.append(temp)
for non_car in non_cars:
color_hist_features1=color_hist(non_car,cspace='HLS',channel=2)
#color_hist_features2=color_hist(non_car,cspace='YUV',channel=1)
hog_features=get_hog_features(non_car,'YCrCb',channel=1,pix_per_cell=16)[0]
spatial_features=bin_spatial(non_car,'YUV')
temp=np.array([])
temp=np.append(temp,color_hist_features1)
#temp=np.append(temp,color_hist_features2)
temp=np.append(temp,hog_features)
temp=np.append(temp,spatial_features)
non_cars_features.append(temp)
file=open('data.pkl','wb')
obj1=['cars',cars_features]
obj2=['non_cars',non_cars_features]
pickle.dump(obj1, file)
pickle.dump(obj2, file)
file.close()
###Output
_____no_output_____
###Markdown
Train Model and Save. The function ``train_and_save()`` uses the features produced by ``extract_features()`` to train the classifier and then saves the trained model to a pickle file. I have used ``StandardScaler()`` to scale all the features in the feature vector for all the images; this is important because large variation among feature values can bias the classifier towards the higher-valued features. The fitted scaler is saved as well, since the same scaler that was used to scale the training data must be used to scale the input when making predictions. Length of the feature vector is 1124.
###Code
def train_and_save(flag_extract_features=False):
if(flag_extract_features):
extract_features()
pickle_in = open("data.pkl","rb")
example_dict = pickle.load(pickle_in)
cars_features=example_dict[1]
example_dict = pickle.load(pickle_in)
non_cars_features=example_dict[1]
pickle_in.close()
print('Length of feature vector=',cars_features[0].shape[0])
X = np.vstack((cars_features, non_cars_features)).astype(np.float64)
# Define the labels vector
y = np.hstack((np.ones(len(cars_features)), np.zeros(len(non_cars_features))))
# Split up data into randomized training and test sets
rand_state = np.random.randint(0, 100)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=rand_state)
X_scaler = StandardScaler().fit(X_train)
X_train = X_scaler.transform(X_train)
X_test = X_scaler.transform(X_test)
t=time.time()
clf=train_model(X_train,y_train)
t2 = time.time()
print(round(t2-t, 2), 'Seconds to train SVC...')
# Check the score of the SVC
print('Test Accuracy of SVC = ', round(clf.score(X_test, y_test), 4))
file=open('classifier.pkl','wb')
obj1=['model',clf]
obj2=['scaler',X_scaler]
pickle.dump(obj1,file)
pickle.dump(obj2,file)
file.close()
return clf,X_scaler
train_and_save()
###Output
Length of feature vector= 1124
41.59 Seconds to train SVC...
Test Accuracy of SVC = 0.9927
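###Markdown
The reported length of 1124 can be accounted for by the three feature groups (a quick arithmetic check based on the configuration above): 32 (S-channel histogram with 32 bins) + 324 (single-channel HOG with ``pix_per_cell=16``, ``cell_per_block=2``, ``orient=9``) + 768 (16x16x3 spatial binning) = 1124.
###Code
# Sketch: break the 1124-dimensional feature vector into its three components
hist_len = 32                      # color_hist with nbins=32 on a single channel
hog_len = 3 * 3 * 2 * 2 * 9        # blocks^2 * cells_per_block^2 * orientations = 324
spatial_len = 16 * 16 * 3          # bin_spatial with size=(16, 16) on 3 channels = 768
print(hist_len + hog_len + spatial_len)   # 1124
###Output
_____no_output_____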
###Markdown
Sliding Window. Once I was done training the classifier, the next challenge was how to find the vehicles in a given image. I used a sliding window approach: windows of different sizes are moved across the image, the feature vector for each window is extracted and fed to the trained classifier, and if the classifier predicts a vehicle that window is marked. It was challenging to find good window sizes for the sliding window; after experimenting with different combinations I finally used two window sizes:1. A 50x50 window for y=400 to y=500, since near the horizon the cars will be far away and small in size; in this case the overlap is 50% for both x and y.2. An 80x100 window for y=500 to y=650, since in this region cars appear larger; in this case the overlap is 70% for both x and y. I have used different window sizes because vehicles in different regions of the image appear different, i.e. vehicles near our car appear bigger and those far away appear smaller. I tried different overlapping factors; a small overlap worked well when the window size was small too, while for large windows the overlap should also be large. I also realized the overlap factor depends on what threshold you use in the heatmap implementation.
###Code
def slide_window(img,window_list, x_start_stop=[None, None], y_start_stop=[None, None],
xy_window=(100, 70), xy_overlap=(0.8, 0.8)):
# If x and/or y start/stop positions not defined, set to image size
if x_start_stop[0] == None:
x_start_stop[0] = 0
if x_start_stop[1] == None:
x_start_stop[1] = img.shape[1]
if y_start_stop[0] == None:
y_start_stop[0] = 0
if y_start_stop[1] == None:
y_start_stop[1] = img.shape[0]
# Compute the span of the region to be searched
xspan = x_start_stop[1] - x_start_stop[0]
yspan = y_start_stop[1] - y_start_stop[0]
# Compute the number of pixels per step in x/y
nx_pix_per_step = np.int(xy_window[0]*(1 - xy_overlap[0]))
ny_pix_per_step = np.int(xy_window[1]*(1 - xy_overlap[1]))
# Compute the number of windows in x/y
nx_buffer = np.int(xy_window[0]*(xy_overlap[0]))
ny_buffer = np.int(xy_window[1]*(xy_overlap[1]))
nx_windows = np.int((xspan-nx_buffer)/nx_pix_per_step)
ny_windows = np.int((yspan-ny_buffer)/ny_pix_per_step)
# Initialize a list to append window positions to
# Loop through finding x and y window positions
# Note: you could vectorize this step, but in practice
# you'll be considering windows one by one with your
# classifier, so looping makes sense
for ys in range(ny_windows):
for xs in range(nx_windows):
# Calculate window position
startx = xs*nx_pix_per_step + x_start_stop[0]
endx = startx + xy_window[0]
starty = ys*ny_pix_per_step + y_start_stop[0]
endy = starty + xy_window[1]
# Append window position to list
window_list.append(((startx, starty), (endx, endy)))
# Return the list of windows
return window_list
def search_windows(image, windows, clf,scaler):
#1) Create an empty list to receive positive detection windows
on_windows = []
#2) Iterate over all windows in the list
for window in windows:
#3) Extract the test window from original image
test_img = cv2.resize(image[window[0][1]:window[1][1], window[0][0]:window[1][0]], (64, 64))
#4) Extract features for that window using single_img_features()
test_features=[]
color_hist_features1=color_hist(test_img,cspace='HLS',channel=2)
#color_hist_features2=color_hist(test_img,cspace='YUV',channel=1)
hog_features=get_hog_features(test_img,'YCrCb',channel=1,pix_per_cell=16)[0]
spatial_features=bin_spatial(test_img,'YUV')
temp=np.array([])
temp=np.append(temp,color_hist_features1)
#temp=np.append(temp,color_hist_features2)
temp=np.append(temp,hog_features)
temp=np.append(temp,spatial_features)
test_features.append(temp)
#print(test_features)
#5) Scale extracted features to be fed to classifier
#scaler=StandardScaler().fit(test_features)
features = scaler.transform(np.array(test_features).reshape(1, -1))
#print(features)
#6) Predict using your classifier
prediction = clf.predict(features)
#7) If positive (prediction == 1) then save the window
#print(prediction)
if prediction == 1:
on_windows.append(window)
#8) Return windows for positive detections
return on_windows
def draw_boxes(img, bboxes, color=(0, 0, 255), thick=6):
# Make a copy of the image
imcopy = np.copy(img)
# Iterate through the bounding boxes
for bbox in bboxes:
# Draw a rectangle given bbox coordinates
cv2.rectangle(imcopy, bbox[0], bbox[1], color, thick)
# Return the image copy with boxes drawn
return imcopy
###Output
_____no_output_____
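###Markdown
To get a feel for how many candidate windows the two search configurations described above generate per frame, you can run ``slide_window`` on a dummy 1280x720 frame (a sketch; the zero array only stands in for a real video frame).
###Code
# Sketch: count the candidate windows produced by the two search configurations
import numpy as np
dummy_frame = np.zeros((720, 1280, 3), dtype=np.uint8)
wins = []
wins = slide_window(dummy_frame, wins, x_start_stop=[200, None], y_start_stop=[400, 500],
                    xy_window=(50, 50), xy_overlap=(0.5, 0.5))
n_small = len(wins)
wins = slide_window(dummy_frame, wins, x_start_stop=[200, None], y_start_stop=[400, 656],
                    xy_window=(100, 80), xy_overlap=(0.7, 0.7))
print(n_small, 'small windows,', len(wins) - n_small, 'large windows')
###Output
_____no_output_____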
###Markdown
Heatmap. Since the window size is small, our classifier predicts 1 (vehicle) for most of the windows that contain some part of a vehicle, so a single vehicle ends up with several marked windows. In the end, however, we want to show only one bounding box per vehicle. To overcome this we use a heatmap. The ``add_heat`` function counts, for each pixel, how many positive windows covered it; i.e. if the value of a pixel is 10, that pixel was included in 10 windows for which the prediction was 1. Once we have a heatmap we can apply a threshold to it so that we keep only those regions that have a high probability of containing a vehicle. Label. We label the obtained detection areas with the ``label()`` function of the scipy.ndimage.measurements package. In this step we outline the boundaries of the labels, that is, we treat each cluster of windows as one car and simply take the bounding box of that cluster (vehicle). False Positive Filtering. To filter false positives I ignored all windows with dimensions smaller than 30x30; using this I was able to filter out most of the false positives in my output.
###Code
from scipy.ndimage.measurements import label
def add_heat(heatmap, bbox_list):
# Iterate through list of bboxes
for box in bbox_list:
# Add += 1 for all pixels inside each bbox
# Assuming each "box" takes the form ((x1, y1), (x2, y2))
heatmap[box[0][1]:box[1][1], box[0][0]:box[1][0]] += 1
# Return updated heatmap
return heatmap# Iterate through list of bboxes
def apply_threshold(heatmap, threshold):
# Zero out pixels below the threshold
heatmap[heatmap <= threshold] = 0
# Return thresholded map
return heatmap
def draw_labeled_bboxes(img, labels):
# Iterate through all detected cars
for car_number in range(1, labels[1]+1):
# Find pixels with each car_number label value
nonzero = (labels[0] == car_number).nonzero()
# Identify x and y values of those pixels
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Define a bounding box based on min/max x and y
bbox = ((np.min(nonzerox), np.min(nonzeroy)), (np.max(nonzerox), np.max(nonzeroy)))
# False Positve Filtering
if((np.absolute(bbox[0][0]-bbox[1][0])>30) & ( np.absolute(bbox[0][1]-bbox[1][1])>30)):
cv2.rectangle(img, bbox[0], bbox[1], (0,0,255), 6)
# Return the image
return img
###Output
_____no_output_____
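###Markdown
To see what ``label()`` does, here is a tiny toy heatmap (a sketch): each connected cluster of non-zero pixels gets its own integer label, which is what lets us draw one box per detected vehicle.
###Code
# Sketch: scipy label() on a toy heatmap with two separate hot clusters
import numpy as np
toy_heat = np.zeros((6, 10))
toy_heat[1:3, 1:4] = 2      # first cluster
toy_heat[4:6, 6:9] = 3      # second cluster
labeled_array, num_clusters = label(toy_heat)
print(num_clusters)         # 2
print(labeled_array)
###Output
_____no_output_____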
###Markdown
Output after using heatmap
###Code
test_imagee=cv2.imread('./test_images/test1.jpg')
windows=[]
windows=slide_window(test_imagee,windows,x_start_stop=[200, None], y_start_stop=[400, 500],xy_window=(50,50),xy_overlap=(0.5,0.5))
windows=slide_window(test_imagee,windows,x_start_stop=[200, None], y_start_stop=[400, 656],xy_window=(100,80),xy_overlap=(0.7,0.7))
#windows=slide_window(test_imagee,windows,x_start_stop=[200, None], y_start_stop=[500, 650],xy_window=(128,128),xy_overlap=(0.6,0.6))
pickle_input = open("classifier.pkl","rb")
example_dict = pickle.load(pickle_input)
clf1=example_dict[1]
example_dict = pickle.load(pickle_input)
scaler1=example_dict[1]
#clf,scaler=train_and_save()
pickle_input.close()
on_windows=search_windows(test_imagee, windows, clf1,scaler1)
heat=np.zeros_like(test_imagee[:,:,0]).astype(np.float)
heatmap=add_heat(heat,on_windows)
th=apply_threshold(heatmap,0.7)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(15,8))
ax1.imshow(heatmap,cmap='hot')
ax1.set_title('HeatMap', fontsize=20)
ax2.imshow(th,cmap='hot')
ax2.set_title('heatmap with threshold', fontsize=20)
###Output
_____no_output_____
###Markdown
Pipeline. I have used a class named ``vehicle_detection`` to keep data from the previous frames. A vehicle will not move more than a few pixels in any direction between frames, so we can use data collected from previous frames to keep the window sizes stable across consecutive frames. The pipeline performs these steps during execution:1. It takes an image as input and converts it from RGB to BGR color space.2. It calls the `slide_window()` function to get the candidate windows.3. It loads the trained classifier and scaler from the pickle file.4. It calls `search_windows()`, providing the image and the windows from step 2; this function fetches the features for each window and feeds them to the classifier to get a prediction.5. It calls the heatmap functions to get a single bounding box for each vehicle in the image.6. It keeps a running average of the heatmap values over the previous 18 frames and uses the mean of those values.7. It draws the bounding boxes and returns the image.
###Code
class vehicle_detection:
heatmap_average=np.array([])
def pipeline(self,image):
windows=[]
image=cv2.cvtColor(image,cv2.COLOR_RGB2BGR)
        windows=slide_window(image,windows,x_start_stop=[200, None], y_start_stop=[400, 500],xy_window=(50,50),xy_overlap=(0.5,0.5))
        windows=slide_window(image,windows,x_start_stop=[200, None], y_start_stop=[400, 656],xy_window=(100,80),xy_overlap=(0.7,0.7))
pickle_in = open("classifier.pkl","rb")
example_dict = pickle.load(pickle_in)
clf=example_dict[1]
example_dict = pickle.load(pickle_in)
scaler=example_dict[1]
#clf,scaler=train_and_save()
pickle_in.close()
on_windows=search_windows(image, windows, clf,scaler)
#output=draw_boxes(image,on_windows)
heat=np.zeros_like(image[:,:,0]).astype(np.float)
heatmap=add_heat(heat,on_windows)
self.heatmap_average=np.append(self.heatmap_average,heatmap)
if(len(self.heatmap_average)>18*len(np.array(heatmap).ravel())):
self.heatmap_average=self.heatmap_average[len(np.array(heatmap).ravel()):]
#print(len(self.heatmap_average),len(np.array(heatmap).ravel()))
heatmap=np.mean((self.heatmap_average.reshape(-1,len(np.array(heatmap).ravel()))),axis=0)
heatmap=heatmap.reshape(-1,image.shape[1])
#print(heatmap.shape)
heatmap=apply_threshold(heatmap,0.7)
labels = label(heatmap)
output = draw_labeled_bboxes(np.copy(image), labels)
return cv2.cvtColor(output,cv2.COLOR_BGR2RGB)
###Output
_____no_output_____
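###Markdown
The frame averaging in step 6 can be illustrated in isolation (a toy sketch, not the exact pipeline code; the random arrays only stand in for per-frame heatmaps): heatmaps from the last few frames are kept in a buffer and averaged before thresholding, which smooths out one-frame false positives.
###Code
# Sketch: averaging per-frame heatmaps over a short history before thresholding
import numpy as np
history = []                                   # holds the last N per-frame heatmaps
N = 18
for frame_heat in [np.random.rand(720, 1280) for _ in range(5)]:   # stand-in frames
    history.append(frame_heat)
    if len(history) > N:
        history.pop(0)                         # drop the oldest frame
    averaged = np.mean(history, axis=0)        # smoothed heatmap
    smoothed = apply_threshold(averaged.copy(), 0.7)   # then threshold as usual
###Output
_____no_output_____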
###Markdown
Final Output for one frame
###Code
test_imagee=img.imread('./test_images/test1.jpg')
detection=vehicle_detection()
plt.imshow(detection.pipeline(test_imagee))
from moviepy.editor import VideoFileClip
from IPython.display import HTML
white_output = 'project_video_Submission_final.mp4'
detection=vehicle_detection()
clip1 = VideoFileClip("project_video.mp4")
white_clip = clip1.fl_image(detection.pipeline) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
###Output
[MoviePy] >>>> Building video project_video_Submission_final.mp4
[MoviePy] Writing video project_video_Submission_final.mp4
###Markdown
P5: Vehicle Detection This is the pipeline of how the algorithm flows.
###Code
import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
from sklearn.preprocessing import StandardScaler
from mpl_toolkits.mplot3d import Axes3D
import glob
from sklearn.model_selection import train_test_split
from scipy.ndimage.measurements import label
print('Regular required imports.')
def color_hist(img,plot_hist=False,ranges=(0,256)):
# making a hist of 3 channels of image (any colorspace)
hist1 = np.histogram(img[:,:,0], bins = 32, range=ranges)
hist2 = np.histogram(img[:,:,1], bins = 32, range=ranges)
hist3 = np.histogram(img[:,:,2], bins = 32, range=ranges)
# calculating the number of bin centers
bin_edges = hist1[1]
bin_centers = (bin_edges[1:] + bin_edges[0:(len(bin_edges)-1)])/2
# Plotting histograms only when asked for
if plot_hist is True:
        fig = plt.figure(figsize=(12,3))
plt.subplot(131)
plt.bar(bin_centers,hist1[0])
plt.xlim(0,256)
plt.title(' Histogram of 1st channel')
plt.subplot(132)
plt.bar(bin_centers,hist2[0])
plt.xlim(0,256)
plt.title(' Histogram of 2nd channel')
plt.subplot(133)
plt.bar(bin_centers,hist3[0])
plt.xlim(0,256)
plt.title(' Histogram of 3rd channel')
hist_features = np.concatenate((hist1[0],hist2[0],hist3[0]))
#hist_features = hist_features.ravel()
return hist_features
print('Color Histogram feature extracted.')
def color_space_explore(img):
def plot3d(pixels, color_rgb, axis_labels=list("RGB"),axis_limits = ((0,255),(0,255),(0,255))):
# Plotting color space in 3d
fig = plt.figure()
ax = Axes3D(fig)
# Axis limits
ax.set_xlim(axis_limits[0])
ax.set_ylim(axis_limits[1])
ax.set_zlim(axis_limits[2])
# Set axis labels and sizes
ax.tick_params(axis='both', which='major', labelsize=14, pad=8)
ax.set_xlabel(axis_labels[0],fontsize=16, labelpad=16)
ax.set_ylabel(axis_labels[1],fontsize=16, labelpad=16)
ax.set_zlabel(axis_labels[2],fontsize=16, labelpad=16)
# Scatter Plot
ax.scatter(pixels[:,:,0].ravel(),
pixels[:,:,1].ravel(),
pixels[:,:,2].ravel(),
c=color_rgb.reshape((-1, 3)),
edgecolors='none')
return ax
#subsampling
scale = max(img.shape[0], img.shape[1], 64) / 64
img_small = cv2.resize(img,(np.int(img.shape[1]/scale),np.int(img.shape[0]/scale)),interpolation=cv2.INTER_NEAREST)
img_small_rgb = cv2.cvtColor(img_small, cv2.COLOR_BGR2RGB)
img_small_HSV = cv2.cvtColor(img_small, cv2.COLOR_BGR2YCrCb)
img_small_RGB = img_small_rgb / 255
# Plotting
plot3d(img_small_rgb, img_small_RGB)
plt.show()
plot3d(img_small_HSV,img_small_RGB, axis_labels=['Y','Cr','Cb'])
plt.show()
print("Function for visualization of apt color space.")
# To reduce size for better computation
def spatial_binning(img,reshape_size=(32,32)):
img = cv2.resize(img,reshape_size)
feature_vector = img.ravel()
return feature_vector
print('Spatial Reduction of images done.')
from skimage.feature import hog
def hog_vec(img,orient,pixel_per_cell,cell_per_block,visualise=False):
hog_features,hog_image = hog(img,orientations = orient,
pixels_per_cell = (pixel_per_cell,pixel_per_cell),
cells_per_block=(cell_per_block,cell_per_block),
visualise = True,feature_vector=True,block_norm="L2-Hys")
if visualise is True:
fig = plt.figure(figsize=(16,16))
plt.subplot(121)
plt.imshow(img,cmap='gray')
plt.title('Original Image')
plt.subplot(122)
plt.imshow(hog_image,cmap='gray')
plt.title('HOG Visualisation')
return hog_features
print('HOG feature vector extracted.')
def comb_feat_vec(img=None,imgs=None, orient=8,pixel_per_cell=8,cell_per_block=2, cspace='RGB'):
# Initialising a feature list
features=[]
if img is None:
# iterating over all the images and extracting the features and appending in the list
for file in imgs:
file_features =[]
img = mpimg.imread(file) # PNG file format (0-1)
if cspace != 'RGB':
if cspace == 'HSV':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
elif cspace == 'LUV':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2LUV)
elif cspace == 'HLS':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
elif cspace == 'YUV':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2YUV)
elif cspace == 'YCrCb':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2YCrCb)
else:
feature_image = np.copy(img)
#Extracting the features after size reduction
spatial_features = spatial_binning(feature_image,reshape_size=(32,32))
## Normalizing the feature
spatial_features = (spatial_features-np.mean(spatial_features))/np.std(spatial_features)
file_features.append(spatial_features)
#Extracting the features from color histogram
color_histogram = color_hist(feature_image,plot_hist=False,ranges=(0,256))
## Normalizing the feature
color_histogram = (color_histogram-np.mean(color_histogram))/np.std(color_histogram)
file_features.append(color_histogram)
# Extracting HOG features
hog_features = hog_vec(feature_image,orient,pixel_per_cell,cell_per_block,visualise=False)
## Normalizing the feature
#hog_features = (hog_features-np.mean(hog_features))/np.std(hog_features)
# Appending all the features in each iteration
file_features.append(hog_features)
features.append(np.concatenate(file_features))
else:
if cspace != 'RGB':
if cspace == 'HSV':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
elif cspace == 'LUV':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2LUV)
elif cspace == 'HLS':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
elif cspace == 'YUV':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2YUV)
elif cspace == 'YCrCb':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2YCrCb)
else:
feature_image = np.copy(img)
#Extracting the features after size reduction
spatial_features = spatial_binning(feature_image,reshape_size=(32,32))
## Normalizing the feature
spatial_features = (spatial_features-np.mean(spatial_features))/np.std(spatial_features)
features.append(spatial_features)
#Extracting the features from color histogram
color_histogram = color_hist(feature_image,plot_hist=False,ranges=(0,256))
## Normalizing the feature
color_histogram = (color_histogram-np.mean(color_histogram))/np.std(color_histogram)
features.append(color_histogram)
# Extracting HOG features
hog_features = hog_vec(feature_image,orient,pixel_per_cell,cell_per_block,visualise=False)
## Normalizing the feature
#hog_features = (hog_features-np.mean(hog_features))/np.std(hog_features)
# Appending all the features in each iteration
features.append(hog_features)
features = np.concatenate(features)
return features
print('Combined all the features.')
######################### MAKE a DT classifier with pruning to merge only good features of these 3 #########################################
def extract_features(imgs, color_space='RGB', spatial_size=(32, 32),
hist_bins=32, orient=8,
pix_per_cell=8, cell_per_block=2, hog_channel=0,
spatial_feat=True, hist_feat=True, hog_feat=True):
# Create a list to append feature vectors to
features = []
# Iterate through the list of images
for file in imgs:
file_features = []
# Read in each one by one
image = mpimg.imread(file)
# apply color conversion if other than 'RGB'
if color_space != 'RGB':
if color_space == 'HSV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
elif color_space == 'LUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2LUV)
elif color_space == 'HLS':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)
elif color_space == 'YUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2YUV)
elif color_space == 'YCrCb':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2YCrCb)
else: feature_image = np.copy(image)
if spatial_feat == True:
spatial_features = spatial_binning(feature_image,reshape_size=(32,32))
file_features.append(spatial_features)
if hist_feat == True:
# Apply color_hist()
hist_features = color_hist(feature_image,plot_hist=False,ranges=(0,256))
file_features.append(hist_features)
if hog_feat == True:
# Call get_hog_features() with vis=False, feature_vec=True
if hog_channel == 'ALL':
hog_features = []
for channel in range(feature_image.shape[2]):
hog_features.extend(hog_vec(feature_image[:,:,channel],
orient,pix_per_cell,cell_per_block,visualise=False))
hog_features = np.ravel(hog_features)
else:
hog_features = hog_vec(feature_image[:,:,hog_channel],orient,pix_per_cell,cell_per_block,visualise=False)
# Append the new feature vector to the features list
file_features.append(hog_features)
features.append(np.concatenate(file_features))
# Return list of feature vectors
return features
def single_img_features(img, color_space='HSV', spatial_size=(32, 32),
hist_bins=32, orient=8,
pixel_per_cell=8, cell_per_block=2, hog_channel=0,
spatial_feat=True, hist_feat=True, hog_feat=True):
#1) Define an empty list to receive features
img_features = []
#2) Apply color conversion if other than 'RGB'
if color_space != 'RGB':
if color_space == 'HSV':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
elif color_space == 'LUV':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2LUV)
elif color_space == 'HLS':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
elif color_space == 'YUV':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2YUV)
elif color_space == 'YCrCb':
feature_image = cv2.cvtColor(img, cv2.COLOR_RGB2YCrCb)
else: feature_image = np.copy(img)
#3) Compute spatial features if flag is set
if spatial_feat == True:
spatial_features = spatial_binning(feature_image,reshape_size=(32,32))
spatial_features = (spatial_features-np.mean(spatial_features))/np.std(spatial_features)
#4) Append features to list
img_features.append(spatial_features)
#5) Compute histogram features if flag is set
if hist_feat == True:
color_histogram = color_hist(feature_image,plot_hist=False,ranges=(0,256))
color_histogram = (color_histogram-np.mean(color_histogram))/np.std(color_histogram)
#6) Append features to list
img_features.append(color_histogram)
#7) Compute HOG features if flag is set
if hog_feat == True:
if hog_channel == 'ALL':
hog_features = []
for channel in range(feature_image.shape[2]):
hog_features.extend(hog_vec(feature_image[:,:,channel],
orient,pixel_per_cell,cell_per_block,visualise=False))
else:
hog_features = hog_vec(feature_image[:,:,hog_channel],orient,pixel_per_cell,cell_per_block,visualise=False)
#8) Append features to list
img_features.append(hog_features)
#print(np.concatenate(img_features).shape)
#9) Return concatenated array of features
return np.concatenate(img_features)
print('Image feature extracter')
def draw_box(img,bboxes,color=(0, 0, 255), thick=6):
# creating a copy
image = np.copy(img)
# looping over all the boxes in bboxes
for bbox in bboxes:
# creating a rectangle
cv2.rectangle(image,bbox[0],bbox[1],color,thick)
return image
print('Rectangle drawn.')
import math
def slide_win(img,xy_windows=(64, 64), xy_overlap=(0.75, 0.75)):
# sliding windows only in the lower half of the image
x_start = 0
x_stop = img.shape[1]
y_start = img.shape[0]/2
y_stop = 660
# calculating the total span where window will slide
x_span = x_stop - x_start
y_span = y_stop - y_start
# calculating the number of windows in x-y directions
nxwindows = 1 + ((x_span - xy_windows[0])/(xy_windows[0]*(1-xy_overlap[0])))
nywindows = 1 + ((y_span - xy_windows[1])/(xy_windows[1]*(1-xy_overlap[1])))
#Initialising the window list
window_list = []
for ny in range(np.int(nywindows)):
for nx in range(np.int(nxwindows)):
startx = round(np.int(nx*(np.int((1-xy_overlap[0])*xy_windows[0])) + x_start))
starty = round(np.int(ny*(np.int((1-xy_overlap[1])*xy_windows[1])) + y_start))
endx = round(np.int(startx + xy_windows[0]))
endy = round(np.int(starty + xy_windows[1]))
window_list.append(((startx,starty),(endx,endy)))
return window_list
print('Window will slide in the lower half of the image fed.')
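# Illustrative count (assuming a 1280x720 input frame): the search region spans the full
# 1280-pixel width and y = 360..660; a 64x64 window with 75% overlap steps 16 px, giving
# 1 + (1280-64)/16 = 77 columns and int(1 + (300-64)/16) = 15 rows, i.e. roughly
# 77 * 15 = 1155 candidate windows per frame.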
def search_windows(img,windows,clf,x_scaler, orient,pixel_per_cell,cell_per_block, cspace='HSV'):
on_window = []
for window in windows:
# Taking a portion of image and scaling it to 64x64
test_img = cv2.resize(img[window[0][1]:window[1][1],window[0][0]:window[1][0]],(64,64))
# Exctracting features from the test image
features = single_img_features(test_img, color_space=cspace,
orient=orient, pixel_per_cell=pixel_per_cell,
cell_per_block=cell_per_block)
#features = comb_feat_vec(test_img,windows, orient,pixel_per_cell,cell_per_block, cspace='RGB')
# Transforming and reshaping into a column vector
test_features = x_scaler.transform(np.array(features).reshape(1,-1))
# Predict with the established classifier
predictions = clf.predict(test_features)
if predictions == 1:
on_window.append(window)
return on_window
print('Searching for cars.')
#heat_map = np.zeros_like(image[:,:,0]).astype(np.float) ########### To be shifted to main ###########
def heat_map(heat_mapa,box_list):
for box in box_list:
heat_mapa[box[0][1]:box[1][1],box[0][0]:box[1][0]] += 1
return heat_mapa
print('Heat map created')
def heat_threshold(heat_map,threshold,disp = False):
heat_map[heat_map<=threshold] = 0
if disp is True:
plt.imshow(heat_map)
return heat_map
print('False Positives removed')
def heat_box(img, labels):
# Iterate through all detected cars
for car_number in range(1, labels[1]+1):
# Find pixels with each car_number label value
nonzero = (labels[0] == car_number).nonzero()
# Identify x and y values of those pixels
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Define a bounding box based on min/max x and y
bbox = ((np.min(nonzerox), np.min(nonzeroy)), (np.max(nonzerox), np.max(nonzeroy)))
# Draw the box on the image
cv2.rectangle(img, bbox[0], bbox[1], (0,0,255), 6)
# Return the image
return img
print('Box on most hot pixels')
def data_explore(car_list,noncar_list,disp=False):
# Initialising the data dictionary
data={}
data['cars'] = len(car_list)
data['non_cars'] = len(noncar_list)
car_img = mpimg.imread(car_list[np.random.randint(0,len(car_list))])
noncar_img = mpimg.imread(noncar_list[np.random.randint(0,len(noncar_list))])
if disp is True:
fig = plt.figure()
plt.subplot(121)
plt.imshow(car_img)
plt.title('Car')
plt.subplot(122)
plt.imshow(noncar_img)
plt.title('Not a car')
data['image_shape'] = car_img.shape
return data
print('Data Exploration')
#################################### Preparation of data for training ###################################
far_cars = glob.glob('../Vehicle_detection/vehicles/vehicles/GTI_Far/image*.png') # Images are in .png file format
left_cars = glob.glob('../Vehicle_detection/vehicles/vehicles/GTI_Left/image*.png')
middle_cars = glob.glob('../Vehicle_detection/vehicles/vehicles/GTI_MiddleClose/image*.png')
right_cars = glob.glob('../Vehicle_detection/vehicles/vehicles/GTI_Right/image*.png')
Kitti_cars = glob.glob('../Vehicle_detection/vehicles/vehicles/KITTI_extracted/*.png')
car_list= (far_cars+left_cars+middle_cars+right_cars+Kitti_cars)
#print(len(car_list))
noncars1 = glob.glob('../Vehicle_detection/non_vehicles/non_vehicles/GTI/image*.png')
noncars2 = glob.glob('../Vehicle_detection/non_vehicles/non_vehicles/Extras/extra*.png')
noncar_list = noncars1 + noncars2
#print(len(noncar_list))
training_set = data_explore(car_list,noncar_list,disp=True)
color_space_explore(mpimg.imread(car_list[10]))
print(training_set['cars'],'Number of cars')
print(training_set['non_cars'],'Number of non-cars')
print(training_set['image_shape'],'Image shape')
############# Extracting features and apt preprocessing
'''car_features = comb_feat_vec(img=None,imgs=car_list, orient=8,pixel_per_cell=8,cell_per_block=2, cspace='RGB')
noncar_features = comb_feat_vec(img=None,imgs=noncar_list[:len(car_list)], orient=8,pixel_per_cell=8,cell_per_block=2, cspace='RGB')
print('Training data')'''
car_features, noncar_features = [],[]
for file in car_list:
# Read in each one by one
image = cv2.imread(file)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
car_feature = single_img_features(image, color_space='YCrCb',
orient=9, pixel_per_cell=16,
cell_per_block=1)
car_features.append(car_feature)
for files in noncar_list:
# Read in each one by one
image = cv2.imread(files)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
noncar_feature =single_img_features(image, color_space='YCrCb',
orient=9, pixel_per_cell=16,
cell_per_block=1)
noncar_features.append(noncar_feature)
print(len(car_features))
print('Combined feature vector created')
# X,Y for splitting and normalization
print(len(car_features))
print(len(car_features[3]))
print(len(noncar_features[0]))
print(len(noncar_features))
X = np.vstack((car_features, noncar_features)).astype(np.float64)
Y = np.hstack((np.ones(len(car_features)), np.zeros(len(noncar_features))))
print(X.shape)
print(Y.shape)
rand_state = np.random.randint(0, 100)
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=rand_state)
print("X and y created and split")
## Normalizing the training/testing data
X_train = np.asarray(X_train)
y_train = np.asarray(y_train)
print(X_train.shape)
print(y_train.shape)
X_scaler = StandardScaler().fit(X_train)
# Apply the scaler to X
X_train = X_scaler.transform(X_train)
X_test = X_scaler.transform(X_test)
print('Data Normalized')
## Making the classifier
from sklearn import svm
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score
import time
# Going with Linear SVM
parameters = { 'C':[0.001, 0.01, 0.1]}
svr = LinearSVC()
#svr = svm.SVC(kernel = 'rbf')
clf = GridSearchCV(svr, parameters)
# Training the classifier
t1 = time.time()
clf.fit(X_train,y_train)
clf.best_params_
t2 = time.time()
print(round(t2-t1, 2), 'Seconds to train SVC...')
print('Accuracy of this classifier is',round(clf.score(X_test,y_test),4))
from scipy.ndimage.measurements import label
test_img = glob.glob('../Vehicle_detection/test_images/test*.jpg')
# Threshold for heat_maps
threshold = 2
xy_windows=(64, 64)
xy_overlap=(0.75, 0.75)
orient = 9
pixel_per_cell = 16
cell_per_block = 1
imgs, heat_imgs, window_imgs=[], [],[]
for image in test_img:
img = cv2.imread(image) # Files are in jpg format (0-255)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
heat_maps = np.zeros_like(img[:,:,0]).astype(np.float)
#plt.imshow(img)
#print(img.shape)
img_copy = np.copy(img)
#Converting images to (0-1) same as png in training set
#img = img/255 # Report this error
#img=img.astype(np.uint8)
windows = slide_win(img,xy_windows=(64, 64), xy_overlap=xy_overlap)
hot_windows = search_windows(img,windows,clf,X_scaler, orient,pixel_per_cell,cell_per_block, cspace='YCrCb')
window_img = draw_box(img, hot_windows, color=(0, 0, 255), thick=6)
heat = heat_map(heat_maps,hot_windows)
heat_thresh = heat_threshold(heat,threshold)
labels = label(heat_thresh)
heat_box_img = heat_box(img_copy,labels)
fig = plt.figure(figsize=(32,32))
plt.subplot(321)
plt.imshow(img)
plt.title('Image')
plt.subplot(322)
plt.imshow(window_img)
plt.title('Boxes identified')
plt.subplot(323)
plt.imshow(heat, cmap='hot')
plt.title('Heat Map')
plt.subplot(324)
plt.imshow(heat_thresh)
plt.title('After Treshold')
plt.subplot(325)
plt.imshow(labels[0])
plt.title('Number of cars')
plt.subplot(326)
plt.imshow(heat_box_img)
plt.title('Car Positions')
plt.show()
print('Image pipeline created')
def process_image(image):
#img = image/np.max(image)
img = image
img_copy = np.copy(img)
heat_maps = np.zeros_like(img[:,:,0]).astype(np.float)
windows = slide_win(img,xy_windows=(64, 64), xy_overlap=(0.75, 0.75))
hot_windows = search_windows(img,windows,clf,X_scaler, orient=9,pixel_per_cell = 16,cell_per_block = 1, cspace='YCrCb')
window_img = draw_box(img, hot_windows, color=(0, 0, 255), thick=6)
heat = heat_map(heat_maps,hot_windows)
heat_thresh = heat_threshold(heat,threshold=2)
labels = label(heat_thresh)
heat_box_img = heat_box(img_copy,labels)
return heat_box_img
print('Pipeline created')
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
from functools import reduce
output_video = '../Vehicle_detection/test_videos_output/test_video_out.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("../Vehicle_detection/test_video.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(output_video, audio=False)
class HeatHistory():
def __init__(self):
self.history = []
def processVideo(inputVideo, outputVideo, frames_to_remember=3, threshhold=1):
"""
Process the video `inputVideo` to find the cars and saves the video to `outputVideo`.
"""
history = HeatHistory()
def pipeline(img):
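        # NOTE: findBoxes, add_heat, apply_threshold, draw_labeled_bboxes, svc, scaler and
        # params are assumed to be defined elsewhere in this notebook; they are not shown in this cell.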
boxes = findBoxes(img, svc, scaler, params)
img_shape = img.shape
heatmap = add_heat(np.zeros(img_shape), boxes)
if len(history.history) >= frames_to_remember:
history.history = history.history[1:]
history.history.append(heatmap)
heat_history = reduce(lambda h, acc: h + acc, history.history)/frames_to_remember
heatmap = apply_threshold(heat_history, threshhold)
labels = label(heatmap)
return draw_labeled_bboxes(np.copy(img), labels)
myclip = VideoFileClip(inputVideo)
output_video = myclip.fl_image(pipeline)
output_video.write_videofile(outputVideo, audio=False)
processVideo('./videos/project_video.mp4', './video_output/project_video.mp4', threshhold=2)
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
from functools import reduce
output_video = '../Vehicle_detection/test_videos_output/project_video_out.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("../Vehicle_detection/project_video.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(output_video, audio=False)
###Output
[MoviePy] >>>> Building video ../Vehicle_detection/test_videos_output/project_video_out.mp4
[MoviePy] Writing video ../Vehicle_detection/test_videos_output/project_video_out.mp4
###Markdown
Read car images
###Code
# Read in car and non-car images
images = glob.glob('vehicles/*/*/*.png')
cars = []
notcars = []
for image in images:
if 'non-vehicles' in image:
notcars.append(image)
else:
cars.append(image)
print('Found {} cars and {} noncars'.format(len(cars), len(notcars)))
###Output
Found 8792 cars and 8968 noncars
###Markdown
Extracting HOG features and color histograms
###Code
# Define a function to return HOG features and visualization
def get_hog_features(img, orient, pix_per_cell, cell_per_block,
vis=False, feature_vec=True):
# Call with two outputs if vis==True
if vis == True:
features, hog_image = hog(img, orientations=orient,pixels_per_cell=(pix_per_cell, pix_per_cell),cells_per_block=(cell_per_block, cell_per_block), transform_sqrt=True,
visualise=vis, feature_vector=feature_vec)
return features, hog_image
# Otherwise call with one output
else:
features = hog(img, orientations=orient, pixels_per_cell=(pix_per_cell, pix_per_cell),
cells_per_block=(cell_per_block, cell_per_block), transform_sqrt=True,
visualise=vis, feature_vector=feature_vec)
return features
# Define a function to compute binned color features
def bin_spatial(img, size=(32, 32)):
color1 = cv2.resize(img[:,:,0], size).ravel()
color2 = cv2.resize(img[:,:,1], size).ravel()
color3 = cv2.resize(img[:,:,2], size).ravel()
return np.hstack((color1, color2, color3))
# Define a function to compute color histogram features
def color_hist(img, nbins=32, bins_range=(0, 256)):
# Compute the histogram of the color channels separately
channel1_hist = np.histogram(img[:,:,0], bins=nbins, range=bins_range)
channel2_hist = np.histogram(img[:,:,1], bins=nbins, range=bins_range)
channel3_hist = np.histogram(img[:,:,2], bins=nbins, range=bins_range)
# Concatenate the histograms into a single feature vector
hist_features = np.concatenate((channel1_hist[0], channel2_hist[0], channel3_hist[0]))
# Return the individual histograms, bin_centers and feature vector
return hist_features
# Define a function to extract features from a list of images
# Have this function call bin_spatial() and color_hist()
def extract_features(imgs, color_space='RGB', spatial_size=(32, 32),
hist_bins=32, orient=9,
pix_per_cell=8, cell_per_block=2, hog_channel="ALL",
spatial_feat=True, hist_feat=True, hog_feat=True):
# Create a list to append feature vectors to
features = []
# Iterate through the list of images
for file in imgs:
file_features = []
# Read in each one by one
image = mpimg.imread(file)
# apply color conversion if other than 'RGB'
if color_space != 'RGB':
if color_space == 'HSV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
elif color_space == 'LUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2LUV)
elif color_space == 'HLS':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)
elif color_space == 'YUV':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2YUV)
elif color_space == 'YCrCb':
feature_image = cv2.cvtColor(image, cv2.COLOR_RGB2YCrCb)
else: feature_image = np.copy(image)
if spatial_feat == True:
spatial_features = bin_spatial(feature_image, size=spatial_size)
file_features.append(spatial_features)
if hist_feat == True:
# Apply color_hist()
hist_features = color_hist(feature_image, nbins=hist_bins)
file_features.append(hist_features)
if hog_feat == True:
# Call get_hog_features() with vis=False, feature_vec=True
if hog_channel == 'ALL':
hog_features = []
for channel in range(feature_image.shape[2]):
hog_features.append(get_hog_features(feature_image[:,:,channel],
orient, pix_per_cell, cell_per_block,
vis=False, feature_vec=True))
hog_features = np.ravel(hog_features)
else:
hog_features = get_hog_features(feature_image[:,:,hog_channel], orient,
pix_per_cell, cell_per_block, vis=False, feature_vec=True)
# Append the new feature vector to the features list
file_features.append(hog_features)
features.append(np.concatenate(file_features))
# Return list of feature vectors
return features
###Output
_____no_output_____
###Markdown
HOG features: Car Example
###Code
# Read in the image
image = mpimg.imread(cars[1])
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
# Call our function with vis=True to see an image output
features, hog_image = get_hog_features(gray, orient= 15,
pix_per_cell= 8, cell_per_block= 2,
vis=True, feature_vec=False)
cv2.imwrite("output_images/car.png", image*255)
hog_image_rescaled = exposure.rescale_intensity(hog_image, in_range=(0, 0.02))
cv2.imwrite("output_images/hog_car_features.jpg", hog_image_rescaled*255)
# Plot the examples
fig = plt.figure()
plt.subplot(121)
plt.imshow(image, cmap='gray')
plt.title('Example Car Image')
plt.subplot(122)
plt.imshow(hog_image, cmap='gray')
plt.title('HOG Visualization')
plt.show()
###Output
/home/bartdezwaan/anaconda2/envs/mlbook/lib/python3.5/site-packages/skimage/feature/_hog.py:119: skimage_deprecation: Default value of `block_norm`==`L1` is deprecated and will be changed to `L2-Hys` in v0.15
'be changed to `L2-Hys` in v0.15', skimage_deprecation)
###Markdown
HOG features: Not Car Example
###Code
# Read in the image
image = mpimg.imread(notcars[1])
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
# Call our function with vis=True to see an image output
features, hog_image = get_hog_features(gray, orient= 15,
pix_per_cell= 8, cell_per_block= 2,
vis=True, feature_vec=False)
cv2.imwrite("output_images/notcar.png", image*255)
hog_image_rescaled = exposure.rescale_intensity(hog_image, in_range=(0, 0.02))
cv2.imwrite("output_images/hog_notcar_features.jpg", hog_image_rescaled*255)
# Plot the examples
fig = plt.figure()
plt.subplot(121)
plt.imshow(image, cmap='gray')
plt.title('Example Not Car Image')
plt.subplot(122)
plt.imshow(hog_image, cmap='gray')
plt.title('HOG Visualization')
plt.show()
###Output
/home/bartdezwaan/anaconda2/envs/mlbook/lib/python3.5/site-packages/skimage/feature/_hog.py:119: skimage_deprecation: Default value of `block_norm`==`L1` is deprecated and will be changed to `L2-Hys` in v0.15
'be changed to `L2-Hys` in v0.15', skimage_deprecation)
###Markdown
Training a classifier
###Code
color_space = 'YCrCb' # Can be RGB, HSV, LUV, HLS, YUV, YCrCb
orient = 15 # HOG orientations
pix_per_cell = 16 # HOG pixels per cell
cell_per_block = 2 # HOG cells per block
hog_channel = "ALL" # Can be 0, 1, 2, or "ALL"
spatial_size = (16, 16) # Spatial binning dimensions
hist_bins = 16 # Number of histogram bins
spatial_feat = True # Spatial features on or off
hist_feat = True # Histogram features on or off
hog_feat = True # HOG features on or off
car_features = extract_features(cars, color_space=color_space,
spatial_size=spatial_size, hist_bins=hist_bins,
orient=orient, pix_per_cell=pix_per_cell,
cell_per_block=cell_per_block,
hog_channel=hog_channel, spatial_feat=spatial_feat,
hist_feat=hist_feat, hog_feat=hog_feat)
notcar_features = extract_features(notcars, color_space=color_space,
spatial_size=spatial_size, hist_bins=hist_bins,
orient=orient, pix_per_cell=pix_per_cell,
cell_per_block=cell_per_block,
hog_channel=hog_channel, spatial_feat=spatial_feat,
hist_feat=hist_feat, hog_feat=hog_feat)
# Create an array stack of feature vectors
X = np.vstack((car_features, notcar_features)).astype(np.float64)
# Fit a per-column scaler
X_scaler = StandardScaler().fit(X)
# Apply the scaler to X
scaled_X = X_scaler.transform(X)
# Define the labels vector
y = np.hstack((np.ones(len(car_features)), np.zeros(len(notcar_features))))
# Split up data into randomized training and test sets
rand_state = np.random.randint(0, 100)
X_train, X_test, y_train, y_test = train_test_split(
scaled_X, y, test_size=0.2, random_state=rand_state)
print('Using spatial binning of:',spatial_size,
'and', hist_bins,'histogram bins')
print('Feature vector length:', len(X_train[0]))
# Use a linear SVC
svc = LinearSVC(C=0.001)
# Check the training time for the SVC
t=time.time()
svc.fit(X_train, y_train)
t2 = time.time()
print(round(t2-t, 2), 'Seconds to train SVC...')
# Check the score of the SVC
print('Test Accuracy of SVC = ', round(svc.score(X_test, y_test), 4))
# Check the prediction time for a single sample
t=time.time()
n_predict = 10
print('My SVC predicts: ', svc.predict(X_test[0:n_predict]))
print('For these',n_predict, 'labels: ', y_test[0:n_predict])
t2 = time.time()
print(round(t2-t, 5), 'Seconds to predict', n_predict,'labels with SVC')
def convert_color(img, conv='RGB2YCrCb'):
if conv == 'RGB2YCrCb':
return cv2.cvtColor(img, cv2.COLOR_RGB2YCrCb)
if conv == 'BGR2YCrCb':
return cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
if conv == 'RGB2LUV':
return cv2.cvtColor(img, cv2.COLOR_RGB2LUV)
def draw_boxes(img, bboxes, color=(0, 0, 255), thick=6):
# Make a copy of the image
imcopy = np.copy(img)
# Iterate through the bounding boxes
for bbox in bboxes:
# Draw a rectangle given bbox coordinates
cv2.rectangle(imcopy, bbox[0], bbox[1], color, thick)
# Return the image copy with boxes drawn
return imcopy
def find_cars(img, ystart, ystop, scale, svc, X_scaler, orient, pix_per_cell, cell_per_block, spatial_size, hist_bins):
draw_img = np.copy(img)
img = img.astype(np.float32)/255
img_tosearch = img[ystart:ystop,:,:]
ctrans_tosearch = convert_color(img_tosearch, conv='RGB2YCrCb')
if scale != 1:
imshape = ctrans_tosearch.shape
ctrans_tosearch = cv2.resize(ctrans_tosearch, (np.int(imshape[1]/scale), np.int(imshape[0]/scale)))
ch1 = ctrans_tosearch[:,:,0]
ch2 = ctrans_tosearch[:,:,1]
ch3 = ctrans_tosearch[:,:,2]
# Define blocks and steps as above
nxblocks = (ch1.shape[1] // pix_per_cell)-1
nyblocks = (ch1.shape[0] // pix_per_cell)-1
nfeat_per_block = orient*cell_per_block**2
# 64 was the original sampling rate, with 8 cells and 8 pix per cell
window = 64
nblocks_per_window = (window // pix_per_cell)-1
cells_per_step = 2 # Instead of overlap, define how many cells to step
nxsteps = (nxblocks - nblocks_per_window) // cells_per_step
nysteps = (nyblocks - nblocks_per_window) // cells_per_step
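    # With the pix_per_cell = 16 and cells_per_step = 2 used in this notebook, each step
    # moves the 64x64 window by 2*16 = 32 px in the (scaled) search image, i.e. adjacent
    # windows overlap by 50%, and nblocks_per_window = 64//16 - 1 = 3 HOG blocks per edge.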
# Compute individual channel HOG features for the entire image
hog1 = get_hog_features(ch1, orient, pix_per_cell, cell_per_block, feature_vec=False)
hog2 = get_hog_features(ch2, orient, pix_per_cell, cell_per_block, feature_vec=False)
hog3 = get_hog_features(ch3, orient, pix_per_cell, cell_per_block, feature_vec=False)
b_boxes = []
for xb in range(nxsteps):
for yb in range(nysteps):
ypos = yb*cells_per_step
xpos = xb*cells_per_step
# Extract HOG for this patch
hog_feat1 = hog1[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel()
hog_feat2 = hog2[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel()
hog_feat3 = hog3[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel()
hog_features = np.hstack((hog_feat1, hog_feat2, hog_feat3))
xleft = xpos*pix_per_cell
ytop = ypos*pix_per_cell
# Extract the image patch
subimg = cv2.resize(ctrans_tosearch[ytop:ytop+window, xleft:xleft+window], (64,64))
# Get color features
spatial_features = bin_spatial(subimg, size=spatial_size)
hist_features = color_hist(subimg, nbins=hist_bins)
# Scale features and make a prediction
test_features = X_scaler.transform(np.hstack((spatial_features, hist_features, hog_features)).reshape(1, -1))
#test_features = X_scaler.transform(np.hstack((shape_feat, hist_feat)).reshape(1, -1))
test_prediction = svc.predict(test_features)
if test_prediction == 1:
xbox_left = np.int(xleft*scale)
ytop_draw = np.int(ytop*scale)
win_draw = np.int(window*scale)
b_boxes.append(((xbox_left, ytop_draw+ystart),(xbox_left+win_draw,ytop_draw+win_draw+ystart)))
return b_boxes
%matplotlib inline
img = mpimg.imread('object_test_images/test3.jpg')
all_boxes = []
b_boxes = find_cars(img, 360, 656, 1.5, svc, X_scaler, orient, pix_per_cell, cell_per_block, spatial_size, hist_bins)
all_boxes = all_boxes + b_boxes
b_boxes = find_cars(img, 360, 656, 1.6, svc, X_scaler, orient, pix_per_cell, cell_per_block, spatial_size, hist_bins)
all_boxes = all_boxes + b_boxes
b_boxes = find_cars(img, 360, 656, 1.8, svc, X_scaler, orient, pix_per_cell, cell_per_block, spatial_size, hist_bins)
all_boxes = all_boxes + b_boxes
bgr = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
bgr = cv2.resize(bgr, (0,0), fx=0.3, fy=0.3)
cv2.imwrite("output_images/test_image.jpg", bgr)
out_img = draw_boxes(img, all_boxes)
bgr = cv2.cvtColor(out_img, cv2.COLOR_RGB2BGR)
bgr = cv2.resize(bgr, (0,0), fx=0.3, fy=0.3)
cv2.imwrite("output_images/test_image_boxed.jpg", bgr)
plt.imshow(out_img)
###Output
/home/bartdezwaan/anaconda2/envs/mlbook/lib/python3.5/site-packages/skimage/feature/_hog.py:119: skimage_deprecation: Default value of `block_norm`==`L1` is deprecated and will be changed to `L2-Hys` in v0.15
'be changed to `L2-Hys` in v0.15', skimage_deprecation)
###Markdown
Filtering false positives and combining overlapping bounding boxes
###Code
from scipy.ndimage.measurements import label
def add_heat(heatmap, bbox_list):
# Iterate through list of bboxes
for box in bbox_list:
# Add += 1 for all pixels inside each bbox
# Assuming each "box" takes the form ((x1, y1), (x2, y2))
heatmap[box[0][1]:box[1][1], box[0][0]:box[1][0]] += 1
# Return updated heatmap
return heatmap
def apply_threshold(heatmap, threshold):
# Zero out pixels below the threshold
heatmap[heatmap <= threshold] = 0
# Return thresholded map
return heatmap
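# Illustrative sketch (not part of the original pipeline): two overlapping detections push
# their shared pixels to heat 2, while a lone (false-positive) detection stays at heat 1
# and is removed by apply_threshold(heat, 1).
_demo_heat = np.zeros((10, 10), dtype=np.float64)
_demo_heat = add_heat(_demo_heat, [((0, 0), (6, 6)), ((3, 3), (9, 9)), ((8, 0), (10, 2))])
_demo_heat = apply_threshold(_demo_heat, 1)
# Only the 3x3 overlap region (rows 3..5, cols 3..5) keeps a nonzero value.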
def draw_labeled_bboxes(img, labels):
# Iterate through all detected cars
for car_number in range(1, labels[1]+1):
# Find pixels with each car_number label value
nonzero = (labels[0] == car_number).nonzero()
# Identify x and y values of those pixels
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Define a bounding box based on min/max x and y
bbox = ((np.min(nonzerox), np.min(nonzeroy)), (np.max(nonzerox), np.max(nonzeroy)))
# Draw the box on the image
cv2.rectangle(img, bbox[0], bbox[1], (0,0,255), 6)
# Return the image
return img
test_images = glob.glob('object_test_images/*')
count = 0
for image in test_images:
count = count+1
img = mpimg.imread(image)
all_boxes = []
b_boxes = find_cars(img, 360, 656, 1.4, svc, X_scaler, orient, pix_per_cell, cell_per_block, spatial_size, hist_bins)
all_boxes = all_boxes + b_boxes
b_boxes = find_cars(img, 360, 656, 1.5, svc, X_scaler, orient, pix_per_cell, cell_per_block, spatial_size, hist_bins)
all_boxes = all_boxes + b_boxes
b_boxes = find_cars(img, 360, 656, 1.8, svc, X_scaler, orient, pix_per_cell, cell_per_block, spatial_size, hist_bins)
all_boxes = all_boxes + b_boxes
heat = np.zeros_like(img[:,:,0]).astype(np.float)
add_heat(heat, all_boxes)
heat = apply_threshold(heat,1)
heatmap = np.clip(heat, 0, 255)
labels = label(heatmap)
draw_img = draw_labeled_bboxes(np.copy(img), labels)
plt.figure()
plt.subplot(121)
plt.imshow(heat, cmap='hot')
plt.subplot(122)
plt.imshow(draw_img)
from collections import deque
all_boxes_deque = deque(maxlen=30)
def add_heat_to_video(heatmap, b_boxes_deque):
# Iterate through list of bboxes
for bbox_list in b_boxes_deque:
for box in bbox_list:
# Add += 1 for all pixels inside each bbox
# Assuming each "box" takes the form ((x1, y1), (x2, y2))
heatmap[box[0][1]:box[1][1], box[0][0]:box[1][0]] += 1
# Return updated heatmap
return heatmap
def pipeline(img):
all_boxes = []
b_boxes = find_cars(img, 360, 656, 1.4, svc, X_scaler, orient, pix_per_cell, cell_per_block, spatial_size, hist_bins)
all_boxes = all_boxes + b_boxes
b_boxes = find_cars(img, 360, 656, 1.5, svc, X_scaler, orient, pix_per_cell, cell_per_block, spatial_size, hist_bins)
all_boxes = all_boxes + b_boxes
b_boxes = find_cars(img, 360, 656, 1.8, svc, X_scaler, orient, pix_per_cell, cell_per_block, spatial_size, hist_bins)
all_boxes = all_boxes + b_boxes
all_boxes_deque.append(all_boxes)
heat = np.zeros_like(img[:,:,0]).astype(np.float)
add_heat_to_video(heat, all_boxes_deque)
heat = apply_threshold(heat,15)
heatmap = np.clip(heat, 0, 255)
labels = label(heatmap)
draw_img = draw_labeled_bboxes(np.copy(img), labels)
return draw_img
from moviepy.editor import VideoFileClip
output = 'project_video_output.mp4'
clip1 = VideoFileClip("project_video.mp4")
output_clip = clip1.fl_image(pipeline)
%time output_clip.write_videofile(output, audio=False)
###Output
[MoviePy] >>>> Building video project_video_output.mp4
[MoviePy] Writing video project_video_output.mp4
###Markdown
Vehicle Detection: Imports
###Code
import cv2
import numpy as np
import matplotlib.pyplot as plt
import glob
import time
%matplotlib inline
from skimage.feature import hog
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn import svm
from scipy.ndimage.measurements import label
###Output
_____no_output_____
###Markdown
Classifier Helper Functions
###Code
# Define a function to compute color histogram features
def color_hist(img, nbins=32, bins_range=(0, 256)):
# Compute the histogram of the color channels separately
hist_features = np.histogram(img, bins=nbins, range=bins_range)
#channel2_hist = np.histogram(img[:,:,1], bins=nbins, range=bins_range)
#channel3_hist = np.histogram(img[:,:,2], bins=nbins, range=bins_range)
# Concatenate the histograms into a single feature vector
#hist_features = np.concatenate((channel1_hist[0], channel2_hist[0], channel3_hist[0]))
# Return the individual histograms, bin_centers and feature vector
return hist_features[0]
# Define a function to compute binned color features
def bin_spatial(img, size=(32, 32)):
# Use cv2.resize().ravel() to create the feature vector
features = cv2.resize(img, size).ravel()
# Return the feature vector
return features
# Define a function to return HOG features and visualization
def get_hog_features(img, orient = 9, pix_per_cell = 8, cell_per_block = 2, feature_vec=True):
return hog(img, orientations=orient, pixels_per_cell=(pix_per_cell, pix_per_cell),
cells_per_block=(cell_per_block, cell_per_block), visualise=False, feature_vector=feature_vec,
block_norm="L2-Hys")
def calculateFeatures(img):
features = []
b, g, r = cv2.split(img)
h, l, s = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HLS))
y, cr, cb = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb))
features.append(bin_spatial(img))
features.append(color_hist(h))
#features.append(color_hist(s))
#features.append(color_hist(y))
features.append(color_hist(cr))
features.append(color_hist(cb))
#features.append(get_hog_features(h))
features.append(get_hog_features(s))
features.append(get_hog_features(y))
#features.append(get_hog_features(b))
#features.append(get_hog_features(g))
#features.append(get_hog_features(r))
return np.concatenate(features)
###Output
_____no_output_____
###Markdown
Create Classifier
###Code
vehicles = glob.glob('data/vehicles/*/*.png')
non_vehicles = glob.glob('data/non-vehicles/*/*.png')
car_features = []
non_car_features = []
t = time.time()
for car in vehicles:
img = cv2.imread(car)
car_features.append(calculateFeatures(img))
for nocar in non_vehicles:
img = cv2.imread(nocar)
non_car_features.append(calculateFeatures(img))
t2 = time.time()
print(round(t2 - t, 2), 'Seconds to load Data.')
t = time.time()
# Create an array stack of feature vectors
X = np.vstack((car_features, non_car_features)).astype(np.float64)
# Define the labels vector
y = np.hstack((np.ones(len(car_features)), np.zeros(len(non_car_features))))
# Split up data into randomized training and test sets
rand_state = np.random.randint(0, 100)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=rand_state)
t2 = time.time()
print(round(t2 - t, 2), 'Seconds to prepare Data.')
t = time.time()
# Fit a per-column scaler
X_scaler = StandardScaler().fit(X_train)
# Apply the scaler to X
X_train = X_scaler.transform(X_train)
X_test = X_scaler.transform(X_test)
t2 = time.time()
print(round(t2 - t, 2), 'Seconds to scale Data.')
#t = time.time()
#pca = PCA(n_components = 1024).fit(X_train) #from about 7000
#X_train = pca.transform(X_train)
#X_test = pca.transform(X_test)
#t2 = time.time()
#print(round(t2 - t, 2), 'Seconds for PCA.')
t = time.time()
#parameters = {'C':[0.1, 1, 5, 10], 'gamma':[0.1, 1, 5, 10]}
#svc = svm.SVC()
#clf = GridSearchCV(svc, parameters)
##clf.best_params_ are {'C': 5, 'gamma': 0.1}
clf = svm.SVC()
clf.fit(X_train, y_train)
t2 = time.time()
print(round(t2 - t, 2), 'Seconds to create Classifier.')
score = clf.score(X_test, y_test)
print('Classifier score:', score)
###Output
109.75 Seconds to load Data.
1.61 Seconds to prepare Data.
2.73 Seconds to scale Data.
202.98 Seconds to create Classifier.
Classifier score: 0.9856418918918919
###Markdown
Pipeline Helper Functions
###Code
def search_image(img, clf, X_scaler, ystart = 380, ystop = 620, cells_per_step = 2, scale = 1):
resized = img[ystart:ystop,:,:]
if scale != 1:
imshape = resized.shape
resized = cv2.resize(resized, (np.int(imshape[1]/scale), np.int(imshape[0]/scale)))
b, g, r = cv2.split(resized)
h, l, s = cv2.split(cv2.cvtColor(resized, cv2.COLOR_BGR2HLS))
y, cr, cb = cv2.split(cv2.cvtColor(resized, cv2.COLOR_BGR2YCrCb))
he, wi = resized.shape[:2]
pix_per_cell = 8
cell_per_block = 2
#blocks and steps as above
nxblocks = (wi // pix_per_cell) - cell_per_block + 1
nyblocks = (he // pix_per_cell) - cell_per_block + 1
# 64 was the original sampling rate, with 8 cells and 8 pix per cell
window = 64
nblocks_per_window = (window // pix_per_cell) - cell_per_block + 1
nxsteps = (nxblocks - nblocks_per_window) // cells_per_step + 1
nysteps = (nyblocks - nblocks_per_window) // cells_per_step + 1
# Compute individual channel HOG features for the entire image
hogs = []
#hogs.append(get_hog_features(h, feature_vec=False))
hogs.append(get_hog_features(s, feature_vec=False))
hogs.append(get_hog_features(y, feature_vec=False))
#hogs.append(get_hog_features(b, feature_vec=False))
#hogs.append(get_hog_features(g, feature_vec=False))
#hogs.append(get_hog_features(r, feature_vec=False))
windows = []
for xb in range(nxsteps):
for yb in range(nysteps):
ypos = yb * cells_per_step
xpos = xb * cells_per_step
# Extract features for this patch
xleft = xpos * pix_per_cell
ytop = ypos * pix_per_cell
features = []
subimg = resized[ytop:ytop+window, xleft:xleft+window]
features.append(bin_spatial(subimg))
subh = h[ytop:ytop+window, xleft:xleft+window]
#subs = cv2.resize(s[ytop:ytop+window, xleft:xleft+window], (64,64))
#suby = cv2.resize(y[ytop:ytop+window, xleft:xleft+window], (64,64))
subcr = cr[ytop:ytop+window, xleft:xleft+window]
subcb = cb[ytop:ytop+window, xleft:xleft+window]
features.append(color_hist(subh))
#features.append(color_hist(subs))
#features.append(color_hist(suby))
features.append(color_hist(subcr))
features.append(color_hist(subcb))
for hog in hogs:
features.append(hog[ypos:ypos+nblocks_per_window, xpos:xpos+nblocks_per_window].ravel())
feature = np.concatenate(features)
# Scale features and make a prediction
scaled = X_scaler.transform(feature.reshape(1, -1))
#transformed = pca.transform(scaled)
prediction = clf.predict(scaled)#transformed)
if prediction == 1:
xbox_left = np.int(xleft * scale)
ytop_draw = np.int(ytop * scale)
win_draw = np.int(window * scale)
p1 = (xbox_left, ytop_draw + ystart)
p2 = (xbox_left + win_draw, ytop_draw + win_draw + ystart)
windows.append((p1, p2))
return windows
def create_heatMap(img, clf, X_scaler, threshold = 2):
windows = search_image(img, clf, X_scaler, ystop = 660, scale = 2.0, cells_per_step = 5)
windows += search_image(img, clf, X_scaler, ystop = 580, scale = 1.5, cells_per_step = 3)
windows += search_image(img, clf, X_scaler, ystop = 540, scale = 1.1, cells_per_step = 4)
heatmap = np.zeros(img.shape[:2], dtype = np.uint8)
for w in windows:
heatmap[w[0][1]:w[1][1], w[0][0]:w[1][0]] += 1
heatmap[heatmap <= threshold] = 0
return heatmap
def draw_labeled_bboxes(img, heatmap, color = (255, 0, 0)):
labels = label(heatmap)
# Iterate through all detected cars
for car_number in range(1, labels[1] + 1):
# Find pixels with each car_number label value
nonzero = (labels[0] == car_number).nonzero()
# Identify x and y values of those pixels
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Define a bounding box based on min/max x and y
bbox = ((np.min(nonzerox), np.min(nonzeroy)), (np.max(nonzerox), np.max(nonzeroy)))
# Draw the box on the image
cv2.rectangle(img, bbox[0], bbox[1], color, 4)
# Return the image
return img
###Output
_____no_output_____
###Markdown
Pipeline
###Code
from Lanes import pipeline as lanepipe
def pipeline(img):
img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
hm = create_heatMap(img, clf, X_scaler)
return cv2.cvtColor(draw_labeled_bboxes(img, hm), cv2.COLOR_BGR2RGB)
def pipeline_with_lanes(img):
bgr = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
hm = create_heatMap(bgr, clf, X_scaler)
lanes = lanepipe(img, state)
return draw_labeled_bboxes(lanes, hm, (0, 0, 255))
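# Note: lanepipe and State come from the project-specific Lanes module (not shown here);
# the global state used above is created further below via state = State() before this
# pipeline is passed to fl_image.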
###Output
_____no_output_____
###Markdown
Using the Pipeline
###Code
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
output = 'output_images/project_video.mp4'
clip1 = VideoFileClip("project_video.mp4")
white_clip = clip1.fl_image(pipeline) #NOTE: this function expects color images!!
%time white_clip.write_videofile(output, audio=False)
from Lanes import State
state = State()
output = 'output_images/project_video_lanes.mp4'
clip1 = VideoFileClip("project_video.mp4")
white_clip = clip1.fl_image(pipeline_with_lanes) #NOTE: this function expects color images!!
%time white_clip.write_videofile(output, audio=False)
###Output
[MoviePy] >>>> Building video output_images/project_video_lanes.mp4
[MoviePy] Writing video output_images/project_video_lanes.mp4
###Markdown
Video Pipeline
###Code
def Pipeline(image, n_frames=15, threshold=20):
window_temp =[]
global windows_list
global windows
global hot_windows_final
windows = windows_Detection(image, windows, draw=False)
windows_list.append(windows)
if len(windows_list) <= n_frames:
hot_windows_final = sum(windows_list, []) # Add windows from all available frames
else:
for val in windows_list[(len(windows_list) - n_frames -1) : (len(windows_list)-1)]:
window_temp.append(val)
hot_windows_final = sum(window_temp, [])
frame_heatmap = np.zeros_like(image[:,:,0])
frame_heatmap = add_heat(frame_heatmap, hot_windows_final)
frame_heatmap = apply_threshold(frame_heatmap, threshold)
labels = label(frame_heatmap)
draw_img = draw_labeled_bboxes(np.copy(image), labels)
#plt.imshow(draw_img)
return draw_img
windows_list = []
import moviepy
from moviepy.editor import VideoFileClip
video_output1 = 'Project_Output.mp4'
video_input2 = VideoFileClip('project_video.mp4')#.subclip(35, 42)
processed_video = video_input2.fl_image(Pipeline)
%time processed_video.write_videofile(video_output1, audio=False)
video_input2.reader.close()
video_input2.audio.reader.close_proc()
###Output
[MoviePy] >>>> Building video Project_Output.mp4
[MoviePy] Writing video Project_Output.mp4
notebook/04-Mnist_keras_baseline.ipynb | ###Markdown
Import our utils functions
###Code
import src.utils.mnist_utils as mnist_utils
import src.utils.ml_utils as ml_utils
import src.utils.tensorflow_helper as tensorflow_helper
import src.model_mnist_v1.trainer.model as mnist_v1
import importlib
importlib.reload(mnist_utils)
importlib.reload(ml_utils)
importlib.reload(mnist_v1)
importlib.reload(tensorflow_helper);# to reload the function and mask the output
###Output
_____no_output_____
###Markdown
Set plots style
###Code
print(plt.style.available)
plt.style.use('seaborn-ticks')
###Output
_____no_output_____
###Markdown
Input Data: Load the data
###Code
# load the data: path is relative to the python path!
(x_train, y_train), (x_test, y_test) = mnist_utils.load_data(path='data/mnist/raw/mnist.pkl.gz')
###Output
_____no_output_____
###Markdown
Basic checks
###Code
# check data shape (training)
x_train.shape
# check data shape (train)
x_test.shape
x_train.dtype, x_test.dtype
np.max(x_train), np.min(x_train), np.max(x_test), np.min(x_test)
###Output
_____no_output_____
###Markdown
Size of the data
###Code
print('{0:.2f} Mb'.format(x_test.nbytes/1024.0**2))
print('{0:.2f} Mb'.format(x_train.nbytes/1024.0**2))
print('{0:.2f} Mb'.format(y_test.nbytes/1024.0**2))
print('{0:.2f} Mb'.format(y_train.nbytes/1024.0**2))
###Output
0.06 Mb
###Markdown
Saving the data as pickle files
###Code
path_train='data/mnist/numpy_train/'
path_test='data/mnist/numpy_test/'
cPickle.dump(x_train, open(path_train+'x_train.pkl', 'wb'))
cPickle.dump(y_train, open(path_train+'y_train.pkl', 'wb'))
cPickle.dump(x_test, open(path_test+'x_test.pkl', 'wb'))
cPickle.dump(y_test, open(path_test+'y_test.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
Visualize the data: some examples from the training dataset
###Code
mnist_utils.plot_mnist_images(x_train[0:25], y_train[0:25])
###Output
_____no_output_____
###Markdown
Some example from testing dataset
###Code
mnist_utils.plot_mnist_images(x_test[0:25], y_test[0:25])
###Output
_____no_output_____
###Markdown
Set parameters
###Code
tf.logging.set_verbosity(tf.logging.INFO)
# number of classes
NUM_CLASSES =10
# dimension of the input data
DIM_INPUT = 784
# number of epoch to train our model
EPOCHS = 10
# size of our mini batch
BATCH_SIZE = 128
# shuffle buffer size
SHUFFLE_BUFFER_SIZE = 10 * BATCH_SIZE
# prefetch buffer size
PREFETCH_BUFFER_SIZE = tf.contrib.data.AUTOTUNE
# number of parallel calls
NUM_PARALELL_CALL = 4
# model version
MODEL='v1'
###Output
_____no_output_____
###Markdown
Defined flags
###Code
tensorflow_helper.del_all_flags(tf.flags.FLAGS)
# just for jupyter notebook, to avoid: "UnrecognizedFlagError: Unknown command line flag 'f'"
tf.app.flags.DEFINE_string('f', '', 'kernel')
# path to store the model and input for Tensorboard
tf.app.flags.DEFINE_string('model_dir_keras', './results/Models/Mnist/tf_1_12/keras/'+MODEL+'/ckpt/', 'Dir to save a model and checkpoints with keras')
tf.app.flags.DEFINE_string('tensorboard_dir_keras', './results/Models/Mnist/tf_1_12/keras/'+MODEL+'/logs/', 'Dir to save logs for TensorBoard with keras')
# parameters for the input dataset and train the model
tf.app.flags.DEFINE_integer('epoch', EPOCHS, 'number of epoch')
tf.app.flags.DEFINE_integer('step_per_epoch', len(x_train) // BATCH_SIZE, 'number of step per epoch')
tf.app.flags.DEFINE_integer('batch_size', BATCH_SIZE, 'Batch size')
tf.app.flags.DEFINE_integer('shuffle_buffer_size', SHUFFLE_BUFFER_SIZE , 'Shuffle buffer size')
tf.app.flags.DEFINE_integer('prefetch_buffer_size', PREFETCH_BUFFER_SIZE, 'Prefetch buffer size')
tf.app.flags.DEFINE_integer('num_parallel_calls', NUM_PARALELL_CALL, 'Number of parallel calls')
# parameters for the model
tf.app.flags.DEFINE_integer('num_classes', NUM_CLASSES, 'number of classes in our model')
tf.app.flags.DEFINE_integer('dim_input', DIM_INPUT, 'dimension of the input data for our model')
FLAGS = tf.app.flags.FLAGS
###Output
_____no_output_____
###Markdown
print(FLAGS)
Pre-defined flags: tf.estimator.ModeKeys.EVAL, tf.estimator.ModeKeys.PREDICT, tf.estimator.ModeKeys.TRAIN
###Code
## Input dataset
Use tf.data.dataset to feed the Keras model
### Input dataset functions for training
Load, convert, preprocess and reshuffle the images and labels
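The project-specific function mnist_v1.input_mnist_array_dataset_fn used below is not shown in this notebook. As a rough sketch only (the name input_fn_sketch and all details are illustrative assumptions, not the actual implementation), such a tf.data input function for this model typically flattens and rescales the images, one-hot encodes the labels and builds a shuffled, batched, prefetched dataset:

def input_fn_sketch(images, labels, flags, mode, batch_size):
    # illustrative sketch, not the project implementation
    images = images.reshape(-1, flags.dim_input).astype('float32') / 255.0
    labels = tf.keras.utils.to_categorical(labels, flags.num_classes)
    dataset = tf.data.Dataset.from_tensor_slices((images, labels))
    if mode == tf.estimator.ModeKeys.TRAIN:
        # reshuffle and repeat only for the training dataset
        dataset = dataset.shuffle(flags.shuffle_buffer_size).repeat()
    return dataset.batch(batch_size).prefetch(flags.prefetch_buffer_size)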
###Output
_____no_output_____
###Markdown
training_dataset = mnist_v1.input_mnist_array_dataset_fn(x_train, y_train, FLAGS, mode=tf.estimator.ModeKeys.TRAIN, batch_size=FLAGS.batch_size)
###Code
### Input dataset functions for testing
Load, convert, preprocess and reshuffle the images and labels
###Output
_____no_output_____
###Markdown
testing_dataset = mnist_v1.input_mnist_array_dataset_fn(x_test, y_test, FLAGS, mode=tf.estimator.ModeKeys.EVAL, batch_size=len(x_test))
###Code
### Printing the numbers related to the number of events (epoch, batch size, ...)
###Output
_____no_output_____
###Markdown
def print_summary_input(data, step='training'):
    print('Summary for the {} dataset:'.format(step))
    if step=='training':
        print(' - number of epoch :', FLAGS.epoch)
        print(' - number of events per epoch :', len(data))
        print(' - batch size :', FLAGS.batch_size)
        print(' - number of step per epoch :', FLAGS.step_per_epoch)
        print(' - total number of steps :', FLAGS.epoch * FLAGS.step_per_epoch)
    else:
        print(' - number of epoch :', 1)
        print(' - number of events per epoch :', len(data))
        print(' - batch size :', None)
        print(' - number of step per epoch :', 1)
        print(' - total number of steps :', 1)

print_summary_input(x_train)
print_summary_input(x_test, 'testing')
###Code
## Build the Machine Learning model using Keras
### Build the model
###Output
_____no_output_____
###Markdown
print('trained model will be saved here:\n', FLAGS.model_dir_keras)

# deleting the folder from previous try
shutil.rmtree(FLAGS.model_dir_keras, ignore_errors=True)

def baseline_model(opt='tf'):
    # create model
    model = tf.keras.Sequential()
    # hidden layer
    model.add(tf.keras.layers.Dense(512, input_dim=FLAGS.dim_input,
                                    kernel_initializer=tf.keras.initializers.he_normal(),
                                    bias_initializer=tf.keras.initializers.Zeros(),
                                    activation='relu'))
    model.add(tf.keras.layers.Dropout(0.2))
    model.add(tf.keras.layers.Dense(512,
                                    kernel_initializer=tf.keras.initializers.he_normal(),
                                    bias_initializer=tf.keras.initializers.Zeros(),
                                    activation='relu'))
    model.add(tf.keras.layers.Dropout(0.2))
    # last layer
    model.add(tf.keras.layers.Dense(FLAGS.num_classes,
                                    kernel_initializer=tf.keras.initializers.he_normal(),
                                    bias_initializer=tf.keras.initializers.Zeros(),
                                    activation='softmax'))
    # weight initialisation
    #  He:               keras.initializers.he_normal(seed=None)
    #  Xavier:           keras.initializers.glorot_uniform(seed=None)
    #  Random Normal:    keras.initializers.RandomNormal(mean=0.0, stddev=0.05, seed=None)
    #  Truncated Normal: keras.initializers.TruncatedNormal(mean=0.0, stddev=0.05, seed=None)
    if opt=='keras':
        optimiser = tf.keras.optimizers.Adam(lr=0.01, beta_1=0.9, epsilon=1e-07)
        #  GD/SGD:   keras.optimizers.SGD(lr=0.01, momentum=0.0, decay=0.0, nesterov=False)
        #  Adam:     keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
        #  RMSProp:  keras.optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=None, decay=0.0)
        #  Momentum: keras.optimizers.SGD(lr=0.01, momentum=0.9, decay=0.0, nesterov=False)
    else:
        # optimiser (use tf.train and not tf.keras to use MirrorStrategy)
        # https://www.tensorflow.org/api_docs/python/tf/train/Optimizer
        optimiser = tf.train.AdamOptimizer(learning_rate=0.01, beta1=0.9, epsilon=1e-07)
        #  GD/SGD:   tf.train.GradientDescentOptimizer(learning_rate, use_locking=False, name='GradientDescent')
        #  Adam:     tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False, name='Adam')
        #  RMSProp:  tf.train.RMSPropOptimizer(learning_rate, decay=0.9, momentum=0.0, epsilon=1e-10, use_locking=False, centered=False, name='RMSProp')
        #  Momentum: tf.train.MomentumOptimizer(learning_rate, momentum, use_locking=False, name='Momentum', use_nesterov=False)
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer=optimiser, metrics=['accuracy'])
    return model

# reset the model
tf.keras.backend.clear_session()
# build the model
model_opt_keras = baseline_model(opt='keras')
# store the original weights
initial_weights = model_opt_keras.get_weights()
###Code
### Check the number of parameters
###Output
_____no_output_____
###Markdown
model_opt_keras.summary()
###Code
### Check input and output layer names
###Output
_____no_output_____
###Markdown
model_opt_keras.input_names   # Use this name as the dictionary key in the TF input function
model_opt_keras.output_names
###Code
## Adding some actions during the training
We use Keras callbacks for that.
### TensorBoard
###Output
_____no_output_____
###Markdown
print('Tensorflow logs will be saved here:\n', FLAGS.tensorboard_dir_keras)

# look at the list of existing files
for file in glob.glob(FLAGS.tensorboard_dir_keras+'*'):
    print(re.findall(r'[^\\/]+|[\\/]', file)[-1])

# remove the files
shutil.rmtree(FLAGS.tensorboard_dir_keras, ignore_errors=True)

tbCallBack = tf.keras.callbacks.TensorBoard(log_dir=FLAGS.tensorboard_dir_keras,
                                            histogram_freq=1,
                                            write_graph=True)
###Code
## Training the model
We use Keras and feed data to our model using tf.data.dataset
- **batch_size** determines the number of samples in each mini batch. Its maximum is the number of all samples, which makes the gradient estimate exact: the loss will decrease towards the minimum if the learning rate is small enough, but each iteration is slow. Its minimum is 1, resulting in stochastic gradient descent: iterations are fast, but the direction of each gradient step is based on a single example, so the loss may jump around. batch_size lets you adjust between the two extremes of accurate gradient direction and fast iteration. Also, the maximum usable batch_size may be limited if your model + data set does not fit into the available (GPU) memory.
- **steps_per_epoch** is the number of batch iterations before a training epoch is considered finished. If you have a training set of fixed size you can ignore it, but it is useful if you have a huge data set or if you generate random data augmentations on the fly, i.e. if your training set has a (generated) infinite size. If you have the time to go through your whole training data set, I recommend skipping this parameter (see the sketch after this list for how it is typically derived).
- **validation_steps** is similar to steps_per_epoch but applies to the validation data set instead of the training data. If you have the time to go through your whole validation data set, I recommend skipping this parameter.
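As a quick illustration (not part of the original notebook; the sample count and batch size below are made-up numbers), steps_per_epoch is typically chosen so that one epoch walks through the whole training set exactly once:

```python
import math

num_train_samples = 60000   # hypothetical training-set size
batch_size = 128            # hypothetical mini-batch size

# one epoch = enough mini-batches to see every training sample once
steps_per_epoch = math.ceil(num_train_samples / batch_size)
print(steps_per_epoch)      # 469
```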
### Fit the model using Keras and tf.data.dataset
###Output
_____no_output_____
###Markdown
%%time

# set to the original weights for testing other pipelines
model_opt_keras.set_weights(initial_weights)

# fit the model (using data.Dataset)
history = model_opt_keras.fit(training_dataset.make_one_shot_iterator(),                 # use training dataset
                              steps_per_epoch=FLAGS.step_per_epoch,                      # number of train steps per epoch
                              validation_data=testing_dataset.make_one_shot_iterator(),  # use testing dataset
                              validation_steps=1,                                        # number of test steps per epoch
                              callbacks=[tbCallBack],                                    # activate TensorBoard
                              epochs=FLAGS.epoch,                                        # number of epochs for training
                              verbose=1)
###Code
### Monitoring using TensorBoard
###Output
_____no_output_____
###Markdown
Start TensorBoard in a separate shell with the env activated: cd to the working dir of the project
###Code
# copy the following in the shell
'tensorboard --logdir '+'"'+FLAGS.tensorboard_dir_keras+'"'
###Output
_____no_output_____
###Markdown
Validation plot after training using Keras history output
###Code
mnist_utils.plot_acc_loss(history, FLAGS.epoch)
###Output
Loss:
- loss [training dataset]: 0.198
- loss [validation dataset: 0.173
Accuracy:
- accuracy [training dataset]: 95.40%
- accuracy [validation dataset: 96.35%
###Markdown
Save the model using Keras
###Code
# with keras optimiser we can save the model+weight
if not os.path.exists(FLAGS.model_dir_keras):
os.makedirs(FLAGS.model_dir_keras)
model_opt_keras.save(FLAGS.model_dir_keras+'keras_model.h5')
###Output
_____no_output_____
###Markdown
Validation plot after training using Tensorboard output file
###Code
debug=True
history=ml_utils.load_data_tensorboard(FLAGS.tensorboard_dir_keras)
if debug:
print('\n')
for file in glob.glob(FLAGS.tensorboard_dir_keras):
print(re.findall(r'[^\\/]+|[\\/]',file)[-1])
print('\n')
print(history.keys())
print('number of entry for train:', len(history['batch_loss']))
print('number of entry for eval:', len(history['epoch_val_loss'][0]))
print('\n\n\n')
ml_utils.plot_acc_loss(history['epoch_loss'][0], history['epoch_loss'][1],
history['epoch_acc'][0], history['epoch_acc'][1],
history['epoch_val_loss'][0], history['epoch_val_loss'][1],
history['epoch_val_acc'][0], history['epoch_val_acc'][1])
debug=True
history=ml_utils.load_data_tensorboard(FLAGS.tensorboard_dir_keras)
if debug:
print('\n')
for file in glob.glob(FLAGS.tensorboard_dir_keras):
print(re.findall(r'[^\\/]+|[\\/]',file)[-1])
print('\n')
print(history.keys())
print('number of entry for train:', len(history['batch_loss']))
print('number of entry for eval:', len(history['epoch_val_loss'][0]))
print('\n\n\n')
ml_utils.plot_acc_loss(history['batch_loss'][0], history['batch_loss'][1],
history['batch_acc'][0], history['batch_acc'][1],
None, None,
None, None)
###Output
INFO:tensorflow:No path found after ./results/Models/Mnist/tf_1_12/keras/v1/logs/events.out.tfevents.1554566900.Fabien-Tarrades-MacBook-Pro.local
/
dict_keys(['batch_acc', 'batch_loss', 'epoch_acc', 'epoch_loss', 'epoch_val_acc', 'epoch_val_loss'])
number of entry for train: 2
number of entry for eval: 10
Loss:
- loss [training dataset]: 0.200
Accuracy:
- accuracy [training dataset]: 95.31%
###Markdown
Checking Tensorboard input files
###Code
history_test=ml_utils.load_data_tensorboard(FLAGS.tensorboard_dir_keras)
print(history_test)
history_test.keys()
###Output
_____no_output_____
###Markdown
MNIST images classification using Keras: baseline 1- MNIST dataset in memory 2- Feed data using tf.data.dataset API 3- Model using tf.keras API 4- Local training and testing using tf.keras API 5- Use TensorBoard to monitor training 6- Monitor loss and accuracy 7- Save the model Install packages on Google Cloud Datalab (locally use conda env) Select the Python3 Kernel: in the menu bar 'Kernel', select **python3** Install needed packages: copy the command below in a Google Cloud Datalab cell: **!pip install tensorflow==1.12** Restart the Kernel to take the newly installed packages into account: click in the menu bar on **Reset Session** Include paths to our functions
###Code
import sys
import os
import pathlib
workingdir=os.getcwd()
print(workingdir)
d=[d for d in os.listdir(workingdir)]
n=0
while not set(['notebook']).issubset(set(d)):
workingdir=str(pathlib.Path(workingdir).parents[0])
print(workingdir)
d=[d for d in os.listdir(str(workingdir))]
n+=1
if n>5:
break
sys.path.insert(0, workingdir)
os.chdir(workingdir)
###Output
/Users/tarrade/Desktop/Work/Data_Science/Tutorials_Codes/Python/proj_DL_models_and_pipelines_with_GCP/notebook
/Users/tarrade/Desktop/Work/Data_Science/Tutorials_Codes/Python/proj_DL_models_and_pipelines_with_GCP
###Markdown
Setup libraries import and plot style Import libraries
###Code
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import sys
import _pickle as cPickle
import shutil
import time
import glob
import re
print(tf.__version__)
print(tf.keras.__version__)
###Output
1.12.0
2.1.6-tf
|
solutions/2018/kws/02-1-kws.ipynb | ###Markdown
December 02 - Parts 1 and 2https://adventofcode.com/2018/day/2
###Code
input_value = input("Please enter the puzzle text here, then press ENTER")
input_words = input_value.split()
sample_input = [
"abcdef",
"bababc",
"abbcde",
"abcccd",
"aabcdd",
"abcdee",
"ababab"
]
# Pick the list you want to work with
words = sample_input
# words = input_words
###Output
_____no_output_____
###Markdown
The puzzle asks us to count the number of words in which some letter occurs exactly twice, and similarly those where some letter appears exactly three times. To do this, we need to count letter frequencies in a word. In python, strings are lists of characters, so this problem is the same as counting occurrences of items in lists. A quick google reveals the following suggestions:https://stackoverflow.com/questions/2600191/how-to-count-the-occurrences-of-a-list-itemThe Counter approach sounds promising
###Code
from collections import Counter # We would normally place imports the top of the document, but they work inline too
Counter("abbcde")
###Output
_____no_output_____
###Markdown
That's very close to what we need, but we're not so much interested in the fact that b occurs twice as in whether a two appears among the counts at all. Counter is a `dict` object - and as with any dict we can access the values:
###Code
Counter("abbcde").values()
###Output
_____no_output_____
###Markdown
That's perfect - now we just need to see if 2 and 3 occur in the values. Since it's fun, we're going to make it a bit more generic and sum up all occurrences.
###Code
multi_char_count = {}
for word in words:
# Count how many times characters occur
word_counter = Counter(word).values()
# Now count how many times single, double, triple etc characters occur:
char_occur_counter = Counter(word_counter)
for occur, occur_count in char_occur_counter.items():
multi_char_count[occur] = multi_char_count.get(occur, 0) + 1 # Use dict.get() as we can supply default value
for k in multi_char_count.keys():
print("{} occurs {} times.".format(k, multi_char_count[k]))
print("The checksum product for 2 and 3 is {}.".format(multi_char_count[2] * multi_char_count[3]))
###Output
_____no_output_____
###Markdown
Part 2
###Code
sample_input_part2 = [
"abcde",
"fghij",
"klmno",
"pqrst",
"fguij",
"axcye",
"wvxyz"
]
# Pick the list you want to work with
words = sample_input_part2
# words = input_words
###Output
_____no_output_____
###Markdown
In this second part, we need to calculate the differences between words. We will create a helper function to do this.It's also worth noting that position is important: we don't want to match 'a's everywhere - we only want to keep a character when the two words agree at the same position (first with first, second with second, and so on). Let's just loop through the string, character by character, as it is very simple to follow:
###Code
def word_diff(word_1, word_2):
""" Compare two words and return a word with only the matching characters """
word_result = [] # Will hold the letters that match
# We can loop over strings like we do over lists
for ix, char_1 in enumerate(word_1):
if char_1 == word_2[ix]:
word_result.append(char_1)
# For the result, we 'join' the array https://docs.python.org/3/library/stdtypes.html#str.join
return "".join(word_result)
# Let's test it
word_diff("fghij", "fguij")
# In the outer loop we enumerate the list, meaning we both get the list index (0,1,2 etc) and the word
# We will use this index in the inner loop to avoid double comparisons
for ix, word_1 in enumerate(words):
# For the inner list we use the index to slice the word list
# so we don't compare the bits we have done in the outer loop
# words[ix+1:] means only the parts of the list from point ix+1 until the end
    # Running on [a,b,c] means a will be compared to [b,c], b will be compared to [c]
# and c won't be compared to anything as it's already been compared to both a and b
for word_2 in words[ix+1:]:
diff = word_diff(word_1, word_2)
# The word we are looking for should be only one character shorter than the input
if len(diff) == len(word_1) - 1:
print(word_1, word_2, diff)
###Output
_____no_output_____ |
Tutorial2/Evaluation with scikit-learn.ipynb | ###Markdown
Evaluation with scikit-learn **CS5483 Data Warehousing and Data Mining**___ Before we begin coding, it is useful to execute the [line magic](https://ipython.readthedocs.io/en/stable/interactive/magics.htmlline-magics) to initialize the environment, and import the libraries necessary for the notebook.
###Code
%reset -f
%matplotlib inline
import numpy as np
from IPython import display
from ipywidgets import interact, IntSlider
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split, cross_val_predict, cross_val_score, StratifiedKFold
from sklearn import tree
from functools import lru_cache
###Output
_____no_output_____
###Markdown
Data Preparation About the dataset We will use a popular dataset called the [*iris dataset*](https://en.wikipedia.org/wiki/Iris_flower_data_set). Iris is a flower with three different species shown below: Iris Setosa, Iris Versicolor, and Iris Virginica. The three iris species differ in the lengths and widths of their *petals* and *sepals*. A standard data mining task is to train a model that can classify the species (*target*) automatically based on the lengths and widths of the petals and sepals (*input features*). Load dataset from scikit-learn **How to load the iris dataset?** To load the iris dataset, we can simply import the [`sklearn.datasets` package](https://scikit-learn.org/stable/datasets/index.html).
###Code
from sklearn import datasets
iris = datasets.load_iris()
type(iris) # object type
###Output
_____no_output_____
###Markdown
`sklearn` stores the dataset as a [`Bunch` object](https://scikit-learn.org/stable/modules/generated/sklearn.utils.Bunch.html), which is essentially [a bunch of properties](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html) put together. **How to learn more about a library?** Detailed documentation can be found at .- The following `IFrame` object embeds the website as an HTML iframe in jupyter notebook. - The class `IFrame` is available after importing the `display` module from `IPython` package first.
###Code
from IPython import display
display.IFrame(src="https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html", width=800, height=600)
###Output
_____no_output_____
###Markdown
We can use the symbol `?` to obtain the docstring of an object and `??` to obtain its source code, if available.
###Code
?datasets.load_iris
??datasets.load_iris
###Output
_____no_output_____
###Markdown
**How to learn more about the dataset?** The property `DESCR` (description) is a string that contains some background information of the dataset:
###Code
print(iris.DESCR)
###Output
_____no_output_____
###Markdown
All the properties of an object can be listed using the built-in function `dir` (directory):
###Code
dir(iris)
###Output
_____no_output_____
###Markdown
**How to show the data?** The properties `data` and `target` contains the data values.
###Code
type(iris.data), type(iris.target)
###Output
_____no_output_____
###Markdown
The data are stored as `numpy` array, which is a powerful data type optimized for performance. It provides useful properties and methods to describe and process the data:
###Code
iris.data.shape, iris.data.ndim, iris.data.dtype
###Output
_____no_output_____
###Markdown
`iris.data` is a 150-by-4 2-dimensional array of 64-bit floating-point numbers.- 150 corresponds to the number of instances, while- 4 corresponds to the number of input attributes. To show the input feature names:
###Code
iris.feature_names
###Output
_____no_output_____
###Markdown
To show the means and standard deviations of the input features:
###Code
iris.data.mean(axis=0), iris.data.std(axis=0)
###Output
_____no_output_____
###Markdown
All the public properties/methods of `numpy` array are printed below:
###Code
import numpy as np
print(*(attr for attr in dir(np.ndarray)
if attr[0] != '_')) # private attributes begin with underscore
###Output
_____no_output_____
###Markdown
The above imports `numpy` and renames it as `np` for convenience. **What is the target feature?** The target variable of the iris dataset is the flower type, whose names are stored by the following property:
###Code
iris.target_names
###Output
_____no_output_____
###Markdown
`iris.target` is an array of integer indices from `{0, 1, 2}` for the three classes.
###Code
iris.target
###Output
_____no_output_____
###Markdown
**Exercise** Fill the following cell with a tuple of the following properties for the target (instead of input features) of the iris dataset:- shape, - number of dimensions, and - the data types of the values.Your solution should look like:```Pythoniris.___.___, iris.___.___, iris.___.___```
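One possible completion of the cell below (our guess, not the official solution) that satisfies the shape/ndim/dtype checks shown afterwards:

```python
iris.target.shape, iris.target.ndim, iris.target.dtype
```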
###Code
# YOUR CODE HERE
raise NotImplementedError()
# tests
shape, ndim, dtype = Out[len(In)-2] # retrieve the last output as the answer,
# so execute this only after executing your solution cell
assert isinstance(shape, tuple) and isinstance(ndim, int) and isinstance(dtype, np.dtype)
###Output
_____no_output_____
###Markdown
**Exercise** Fill in the following cell with a tuple of- the list of minimum values of the input features, and- the list of maximum values of the input features.Your answer should look like:```Pythoniris.___.___(axis=0), iris.___.___(axis=0)```
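One possible completion (our guess, not the official solution) following the suggested template:

```python
iris.data.min(axis=0), iris.data.max(axis=0)
```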
###Code
# YOUR CODE HERE
raise NotImplementedError()
# tests
feature_min, feature_max = Out[len(In) - 2]
assert feature_min.shape == (4, ) == feature_max.shape
###Output
_____no_output_____
###Markdown
Create pandas DataFrame The [package `pandas`](https://pandas.pydata.org/docs/user_guide/index.html) provides additional tools to display and process a dataset. First, we translate the `Bunch` object into a `pandas` [`DataFrame` object](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html?highlight=dataframepandas.DataFrame).
###Code
import pandas as pd
# write the input features first
iris_df = pd.DataFrame(data=iris.data, columns=iris.feature_names)
# append the target values to the last column
iris_df['target'] = iris.target
iris_df # to display the DataFrame
###Output
_____no_output_____
###Markdown
In jupyter notebook, a `DataFrame` object is conveniently displayed as an HTML table, so there is no need to `print` it. We can control how much information to show by setting the [display options](https://pandas.pydata.org/pandas-docs/stable/user_guide/options.html). We can also display the statistics of different numerical attributes using the methods `describe` and `boxplot`.
###Code
iris_df.describe()
%matplotlib inline
iris_df.boxplot(figsize=(10,5)) # figsize specifies figure (width,height) in inches
###Output
_____no_output_____
###Markdown
The line magic [`%matplotlib`](https://ipython.readthedocs.io/en/stable/interactive/magics.htmlmagic-matplotlib) specifies where the plot should appear. **How to handle nominal class attribute?** Note that the boxplot also covers the target attribute, but it should not. (Why?) Let's take a look at the current datatypes of the different attributes.
###Code
print(iris_df.dtypes)
###Output
_____no_output_____
###Markdown
The target is regarded as a numeric attribute with the integer type `int64`. Instead, the target should be categorical, allowing only three possible values, one for each iris species. To fix this, we can use the `astype` method to convert the data type automatically. (More details [here](https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.htmlseries-creation.section).)
###Code
iris_df.target = iris_df.target.astype('category')
iris_df.boxplot(figsize=(10,5)) # target is not plotted as expected
iris_df.target.dtype
###Output
_____no_output_____
###Markdown
We can also rename the target categories `{0, 1, 2}` to the more meaningful names of the iris species in `iris.target_names`. (See the [documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.htmlrenaming-categories).)
###Code
iris_df.target.cat.categories = [iris.target_names[i] for i in range(3)]
iris_df # check that the target values are now setosa, versicolor, or virginica.
###Output
_____no_output_____
###Markdown
**Exercise** For nominal attributes, a more meaningful statistic than the mean is the count of each possible value. To count the number of instances for each flower class, assign `target_counts` to the output of the `value_counts` method of an appropriate column of `iris_df`.Your solution should look like:```Pythontarget_counts = iris_df.target.___()```
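A possible answer (our guess, not the official solution) following the suggested template:

```python
target_counts = iris_df.target.value_counts()
```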
###Code
# YOUR CODE HERE
raise NotImplementedError()
target_counts
# tests
assert target_counts.shape == (3, )
###Output
_____no_output_____
###Markdown
**How to select specific rows and columns?** The following uses [`ipywidget`](https://ipywidgets.readthedocs.io/en/latest/) to show the various ways of selecting/slicing the rows of a `DataFrame`.
###Code
from ipywidgets import interact
@interact(command=[
'iris_df.head()', 'iris_df[0:4]', 'iris_df.iloc[0:4]', 'iris_df.loc[0:4]',
'iris_df.loc[iris_df.index.isin(range(0,4))]',
'iris_df.loc[lambda df: df.target==0]', 'iris_df.tail()', 'iris_df[-1:]'
])
def select_rows(command):
output = eval(command)
display.display(output)
###Output
_____no_output_____
###Markdown
The following shows the various ways of slicing different columns.
###Code
@interact(command=[
'iris_df.target', 'iris_df["target"]', 'iris_df[["target"]]',
'iris_df[iris_df.columns[:-1]]',
'iris_df.loc[:,iris_df.columns[0]:iris_df.columns[-1]]',
'iris_df.loc[:,~iris_df.columns.isin(["target"])]', 'iris_df.iloc[:,:-1]'
])
def select_columns(command):
output = eval(command)
display.display(output)
###Output
_____no_output_____
###Markdown
For instance, to compute the mean values of the input features for iris setosa:
###Code
iris_df[lambda df: df.target == 'setosa'].mean()
###Output
_____no_output_____
###Markdown
We can also use the method `groupby` to obtain the mean values by flower types:
###Code
iris_df.groupby(['target']).mean()
###Output
_____no_output_____
###Markdown
**Exercise** Create a new `DataFrame` `iris2d_df`. Note that you may also use the method `drop`.
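One possible answer (our guess, not the official solution): drop the two sepal columns so that only the petal measurements and the target remain:

```python
iris2d_df = iris_df.drop(columns=['sepal length (cm)', 'sepal width (cm)'])
```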
###Code
# to learn how to use drop
?iris_df.drop
# YOUR CODE HERE
raise NotImplementedError()
iris2d_df
# tests
assert set(
iris2d_df.columns) == {'petal length (cm)', 'petal width (cm)', 'target'}
###Output
_____no_output_____
###Markdown
Alternatives methods of loading a dataset The following code loads the iris dataset from an [ARFF file](https://waikato.github.io/weka-wiki/formats_and_processing/arff/) instead.
###Code
from scipy.io import arff
import urllib.request
import io
ftpstream = urllib.request.urlopen(
'https://raw.githubusercontent.com/Waikato/weka-3.8/master/wekadocs/data/iris.arff'
)
iris_arff = arff.loadarff(io.StringIO(ftpstream.read().decode('utf-8')))
iris_df2 = pd.DataFrame(iris_arff[0])
iris_df2['class'] = iris_df2['class'].astype('category')
iris_df2
from scipy.io import arff
import urllib.request
import io
ftpstream = urllib.request.urlopen(
'https://www.openml.org/data/download/61/dataset_61_iris.arff')
iris_arff = arff.loadarff(io.StringIO(ftpstream.read().decode('utf-8')))
iris_df2 = pd.DataFrame(iris_arff[0])
iris_df2['class'] = iris_df2['class'].astype('category')
iris_df2
###Output
_____no_output_____
###Markdown
Pandas also provides a method to read the iris dataset directly from a CSV file, either locally or from the internet such as the [UCI repository](https://archive.ics.uci.edu/ml/datasets/iris).
###Code
iris_df3 = pd.read_csv(
'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data',
sep=',',
dtype={'target': 'category'},
header=None,
names=iris.feature_names + ['target'],
)
iris_df3
###Output
_____no_output_____
###Markdown
The additional arguments `dtype`, `header`, and `names` allow us to specify the attribute datatypes and names. Unlike the ARFF format, a CSV file may not contain such information. Training and Testing To give an unbiased performance estimate of a learning algorithm of interest, the fundamental principle is *to use separate datasets for training and testing*. If there is only one dataset, we should split it into *training sets* and *test sets* by *random sampling* to avoid bias in the performance estimate. In the following subsections, we will illustrate some methods of splitting the datasets for training and testing. Stratified holdout method We randomly sample data for training or testing without replacement. This is implemented by the `train_test_split` function from the `sklearn.model_selection` package.
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(
iris_df[iris.feature_names],
iris_df.target,
test_size=0.2, # fraction for test
random_state=1) # random seed
X_train.shape, X_test.shape, Y_train.shape, Y_test.shape
###Output
_____no_output_____
###Markdown
We also separated the input features and target for the training and test sets. The fraction of holdout test data is
###Code
len(Y_test) / (len(Y_test) + len(Y_train))
###Output
_____no_output_____
###Markdown
The class proportion of the iris dataset is:
###Code
iris_df.target.value_counts().plot(kind='bar', ylabel='counts')
###Output
_____no_output_____
###Markdown
We can check that the class proportions for the test and training sets are maintained in expectation:
###Code
@interact(data=['Y_train', 'Y_test'], seed=(0, 10))
def class_proportions(data, seed=0):
Y_train, Y_test = train_test_split(iris_df.target,
test_size=0.2,
random_state=seed)
eval(data).value_counts().sort_index().plot(kind='bar', ylabel='counts')
###Output
_____no_output_____
###Markdown
We first apply a learning algorithm to train a classifier using only the training set. Let's say we want to evaluate the decision tree induction algorithm in `sklearn`.
###Code
from sklearn import tree
clf = tree.DecisionTreeClassifier(random_state=0) # the training is also randomized
clf.fit(X_train, Y_train) # fit the model to the training set
###Output
_____no_output_____
###Markdown
We can use the `predict` method of the classifier to predict the flower type from input features.
###Code
Y_pred = clf.predict(X_test)
###Output
_____no_output_____
###Markdown
The following code returns the accuracy of the classifier, namely, the fraction of correct predictions on the test set.
###Code
accuracy_holdout = (Y_pred == Y_test).mean()
accuracy_holdout
###Output
_____no_output_____
###Markdown
The `score` method performs the same computation. The following uses f-string to format the accuracy to 3 decimal places.
###Code
print(f'Accuracy: {clf.score(X_test, Y_test):0.3f}')
###Output
_____no_output_____
###Markdown
To see input features of misclassified test instances:
###Code
X_test[Y_pred != Y_test]
###Output
_____no_output_____
###Markdown
**Exercise** Assign `accuracy_holdout_training_set` to the accuracy of the predictions on the training set. Note that the accuracy is overly optimistic.
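One natural completion (our guess; the hidden tests may expect a different form) is to reuse the `score` method on the training split. Since the tree was fit on these very samples, the resulting accuracy is close to 1 and hence overly optimistic:

```python
accuracy_holdout_training_set = clf.score(X_train, Y_train)
```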
###Code
# YOUR CODE HERE
raise NotImplementedError()
accuracy_holdout_training_set
# hidden tests
###Output
_____no_output_____
###Markdown
**Exercise** Complete the following function which applies random subsampling to reduce the variance of the accuracy estimate. In particular, the function `subsampling_score` should return the average of `N` accuracies of $20\%$ stratified holdout with random seed set from `0` up to `N-1`, where `N` is the integer input argument of the function.
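A possible way to complete `holdout_score` (our assumption, not the official solution) is to fit the classifier on the 80% training split and return its accuracy on the 20% holdout:

```python
@lru_cache(None)  # cache the return value to avoid repeated computation
def holdout_score(seed):
    clf = tree.DecisionTreeClassifier(random_state=seed)
    X_train, X_test, Y_train, Y_test = train_test_split(iris_df[iris.feature_names],
                                                        iris_df.target,
                                                        test_size=0.2,
                                                        random_state=seed)
    clf.fit(X_train, Y_train)          # train on the 80% split
    return clf.score(X_test, Y_test)   # accuracy on the 20% holdout
```

`subsampling_score(N)` then simply averages these holdout accuracies over seeds 0 to N-1.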
###Code
import numpy as np
from functools import lru_cache
@lru_cache(None) # cache the return value to avoid repeated computation
def holdout_score(seed):
clf = tree.DecisionTreeClassifier(random_state=seed)
X_train, X_test, Y_train, Y_test = train_test_split(iris_df[iris.feature_names],
iris_df.target,
test_size=0.2,
random_state=seed)
# YOUR CODE HERE
raise NotImplementedError()
@lru_cache(None)
def subsampling_score(N):
return sum(holdout_score(i) for i in range(N))/N
# tests
assert np.isclose(subsampling_score(50), 0.9466666666666663)
###Output
_____no_output_____
###Markdown
The following code plots the mean accuracies for different `N`. The variance should be smaller as `N` increases.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.stem([subsampling_score(i) for i in range(1,50)])
plt.xlabel(r'$N$')
plt.ylabel(r'Mean accuracy')
###Output
_____no_output_____
###Markdown
The documentation [here](https://ogrisel.github.io/scikit-learn.org/sklearn-tutorial/modules/generated/sklearn.cross_validation.Bootstrap.html) describes another alternative called the bootstrap method, which samples with replacement. Stratified cross-validation Another method of evaluating a classification algorithm is to randomly partition the data into $k$ *folds*, which are nearly equal-sized blocks of instances. The score is the average of the accuracies obtained by using each fold to test a classifier trained on the remaining folds. The module `sklearn.model_selection` provides two functions `cross_val_predict` and `cross_val_score` for this purpose.
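For intuition, a minimal sketch (not part of the original notebook) of what `cross_val_score` does under the hood: for each of the $k$ folds, train on the remaining $k-1$ folds and score on the held-out fold:

```python
import numpy as np
from sklearn import datasets, tree
from sklearn.model_selection import StratifiedKFold

iris = datasets.load_iris()
X, y = iris.data, iris.target

skf = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)
fold_accuracies = []
for train_idx, test_idx in skf.split(X, y):
    clf = tree.DecisionTreeClassifier(random_state=0)
    clf.fit(X[train_idx], y[train_idx])                           # train on k-1 folds
    fold_accuracies.append(clf.score(X[test_idx], y[test_idx]))   # test on the held-out fold

print(np.round(fold_accuracies, 4), 'mean:', np.mean(fold_accuracies))
```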
###Code
from sklearn.model_selection import cross_val_predict, cross_val_score, StratifiedKFold
cv = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)
###Output
_____no_output_____
###Markdown
For instance, the following returns the misclassified instances by 5-fold cross-validation.
###Code
iris_df['prediction'] = pd.Categorical(cross_val_predict(clf, iris_df[iris.feature_names], iris_df.target, cv=cv))
iris_df.loc[lambda df: df['target'] != df['prediction']]
clf = tree.DecisionTreeClassifier(random_state=0)
scores = cross_val_score(clf, iris_df[iris.feature_names], iris_df.target, cv=5)
print('Accuracies: ',', '.join(f'{acc:.4f}' for acc in scores))
print(f'Mean accuracy: {scores.mean():.4f}')
###Output
_____no_output_____
###Markdown
**Exercise** Assign `accuracy_cv` to the accuracy obtained by the cross validation result above.
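One possible answer (our guess, not the official solution), assuming the intended quantity is the mean of the fold accuracies computed above:

```python
accuracy_cv = scores.mean()
```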
###Code
# YOUR CODE HERE
raise NotImplementedError()
accuracy_cv
# hidden tests
###Output
_____no_output_____ |
monte-carlo/Monte_Carlo.ipynb | ###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(16, 3, False)
End game! Reward: 1.0
You won :)
(16, 1, False)
(20, 1, False)
End game! Reward: -1
You lost :(
(9, 5, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((15, 9, False), 1, -1)]
[((6, 10, False), 1, 0), ((16, 10, False), 1, 0), ((21, 10, False), 1, -1)]
[((13, 10, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episodes=generate_episode_from_limit_stochastic(env)
states,actions,rewards=zip(*episodes)
gammas=[gamma**i for i in range(len(rewards)+1)]
for i in range(len(episodes)):
N[states[i]][actions[i]]+=1
returns_sum[states[i]][actions[i]]+=np.sum(np.array(rewards)[i:]*np.array(gammas)[:-(i+1)])
#for i in range(len(episodes)):
Q[states[i]][actions[i]]=returns_sum[states[i]][actions[i]]/N[states[i]][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
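For reference, the constant-$\alpha$ update implemented in `update_Q` below can be written as $Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\,\big(G_t - Q(S_t, A_t)\big)$, where $G_t$ is the discounted return following time step $t$.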
###Code
def generate_episodes_from_Q(env,Q,epsilon,nA):
episode=[]
state=env.reset()
while True:
if state in Q:
action=np.random.choice(np.arange(nA),p=get_probs(Q[state],epsilon,nA))
else:
action=env.action_space.sample()
next_state,reward,done,info = env.step(action)
episode.append((state,action,reward))
state=next_state
if done:
break
return episode
def get_probs(Q_s,epsilon,nA):
policy_s=np.ones(nA)*epsilon/nA
best_a=np.argmax(Q_s)
policy_s[best_a]=1-epsilon+(epsilon/nA)
return policy_s
def update_Q(env,episode,Q,alpha,gamma):
states, actions, rewards=zip(*episode)
discounts=np.array([gamma**i for i in range(len(episode)+1)])
for i,state in enumerate(states):
old_Q=Q[state][actions[i]]
Q[state][actions[i]]=old_Q+alpha*(sum(rewards[i:]*discounts[:-(i+1)])-old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0,eps_start=1.0,eps_decay=0.99999,eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon=eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
epsilon=max(epsilon*eps_decay,eps_min)
episode=generate_episodes_from_Q(env,Q,epsilon,nA)
Q=update_Q(env,episode,Q,alpha,gamma)
policy=dict((k,np.argmax(v)) for k,v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, num_episodes=5000000, alpha=0.05)
###Output
Episode 5000000/5000000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(5):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(12, 10, False)
End game! Reward: -1.0
You lost :(
(14, 9, False)
End game! Reward: -1.0
You lost :(
(15, 10, False)
End game! Reward: -1
You lost :(
(17, 10, False)
End game! Reward: -1
You lost :(
(17, 10, False)
(20, 10, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
tuples = generate_episode_from_limit_stochastic(env)
print("Tuples \n",tuples)
s,a,r = zip(*tuples)
print("States \n",s)
print("Actions \n",a)
print("Rewards \n",r)
###Output
Tuples
[((14, 5, False), 1, -1)]
States
((14, 5, False),)
Actions
(1,)
Rewards
(-1,)
Tuples
[((20, 10, False), 0, 1.0)]
States
((20, 10, False),)
Actions
(0,)
Rewards
(1.0,)
Tuples
[((13, 1, False), 1, 0), ((16, 1, False), 0, 1.0)]
States
((13, 1, False), (16, 1, False))
Actions
(1, 0)
Rewards
(0, 1.0)
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# Get the S,A,R set of states,actions and rewards
s,a,r = zip(*generate_episode(env))
# List of discounts
discounts = np.array([gamma**i for i in range(len(r)+1)])
## TODO: complete the function
for i,state in enumerate(s):
# immediate reward and all future rewards, discounted
returns_sum[state][a[i]] += sum(r[i:]*discounts[:-(1+i)])
N[state][a[i]] += 1.0
# Average First visit
Q[state][a[i]] = returns_sum[state][a[i]] / N[state][a[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env,500000, 0.2)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import time
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(18, 3, False)
End game! Reward: -1
You lost :(
(21, 9, True)
(18, 9, False)
End game! Reward: -1
You lost :(
(13, 10, False)
(17, 10, False)
(20, 10, False)
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((9, 10, False), 1, 0), ((19, 10, False), 1, -1)]
[((18, 10, False), 1, 0), ((21, 10, False), 0, 0.0)]
[((12, 9, False), 1, 0), ((19, 9, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0, every_visit=True):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episode = generate_episode(env)
is_first_visit = defaultdict(lambda: np.full(env.action_space.n, True))
# walk backwards from the last action to the first
G_t_1 = 0.0
partial_returns = []
for t in reversed(range(len(episode))):
state, action, reward = episode[t]
# calculate return G_t from this point
G_t = reward + gamma * G_t_1
G_t_1 = G_t
partial_returns.insert(0, G_t)
for t in range(len(episode)):
state, action, reward = episode[t]
G_t = partial_returns[t]
# check for first-visit for this episode, if requested
if every_visit or is_first_visit[state][action]:
is_first_visit[state][action] = False
# recalculate the average
returns_sum[state][action] += G_t
N[state][action] += 1.0
Q[state][action] = returns_sum[state][action] / N[state][action]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
start_time = time.time()
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
end_time = time.time()
print('Time', end_time-start_time)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.Time 73.36299920082092
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def get_greedy_action(Q, state):
# if there are two or more actions for which Q[s][a] is maximized, choose uniformly between them
greedy_actions = np.argwhere(Q[state] == np.amax(Q[state]))
greedy_actions = greedy_actions.flatten()
return np.random.choice(greedy_actions)
def generate_episode_eps_policy(env, Q, eps):
nA = env.action_space.n
episode = []
state = env.reset()
while True:
# with probability eps choose random action, 1-eps greedy action
action = np.random.choice(np.arange(nA)) if np.random.uniform() <= eps \
else get_greedy_action(Q, state)
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def improve_q_from_episode(Q, policy, episode, alpha, gamma=1.0, every_visit=False):
is_first_visit = defaultdict(lambda: np.full(env.action_space.n, True))
# walk backwards from the last action to the first
G_t_1 = 0.0
partial_returns = []
for t in reversed(range(len(episode))):
state, action, reward = episode[t]
# calculate return G_t from this point
G_t = reward + gamma * G_t_1
G_t_1 = G_t
partial_returns.insert(0, G_t)
for t in range(len(episode)):
state, action, reward = episode[t]
G_t = partial_returns[t]
# check for first-visit for this episode, if requested
if every_visit or is_first_visit[state][action]:
is_first_visit[state][action] = False
# recalculate the average and update the policy
Q[state][action] += alpha * (G_t - Q[state][action])
policy[state] = get_greedy_action(Q, state)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps=1, final_eps=0.1, stop_eps_after=0.5, every_visit=False):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
    policy = defaultdict(lambda: np.random.choice(np.arange(nA)))  # default to a random action for unseen states
# eps will decrease linearly and reach final_eps in episode stop_eps_at_episode
final_eps = min(eps, final_eps)
stop_eps_at_episode = num_episodes * stop_eps_after - 1
eps_delta = (eps - final_eps) / stop_eps_at_episode
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# generate episode with current policy and eps
episode = generate_episode_eps_policy(env, Q, eps)
eps -= eps_delta
# for each state-action pair, get return and update q-table and policy
Q = improve_q_from_episode(Q, policy, episode, alpha, gamma, every_visit)
return policy, Q
###Output
_____no_output_____
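###Markdown
As a quick sanity check of the linear $\epsilon$ schedule used by `mc_control` above, the short snippet below (values mirror the defaults `eps=1`, `final_eps=0.1`, `stop_eps_after=0.5` with 500,000 episodes; it is only an illustration, not part of the training loop) prints the exploration rate at a few points of training.
###Code
num_episodes, eps0, final_eps, stop_eps_after = 500000, 1.0, 0.1, 0.5
stop_eps_at_episode = num_episodes * stop_eps_after - 1
eps_delta = (eps0 - final_eps) / stop_eps_at_episode
for i in [0, 125000, 250000, 500000]:
    # epsilon after i decrements, never allowed below final_eps
    print(i, round(max(eps0 - i * eps_delta, final_eps), 3))
###Output
_____no_output_____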
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
start_time = time.time()
policy, Q = mc_control(env, 500000, 0.01) # eps will go from 1 to .1 in 250000 episodes
end_time = time.time()
print('Time', end_time-start_time)
###Output
Episode 500000/500000.Time 89.53930568695068
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
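###Markdown
Since the observation space is `Tuple(Discrete(32), Discrete(11), Discrete(2))`, there are at most $32 \times 11 \times 2 = 704$ nominal states, though only a fraction of them (player sums of roughly 4-21 against dealer cards 1-10) actually occur during play.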
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(5, 4, False)
(12, 4, False)
End game! Reward: -1.0
You lost :(
(10, 1, False)
(12, 1, False)
End game! Reward: -1.0
You lost :(
(13, 10, False)
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((17, 10, True), 1, 0), ((21, 10, True), 0, 1.0)]
[((12, 4, False), 1, 0), ((17, 4, False), 0, -1.0)]
[((15, 10, False), 1, 0), ((20, 10, False), 0, 0.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
    returns_sum = defaultdict(lambda: np.zeros(env.action_space.n)) # sum of returns per state-action pair
    N = defaultdict(lambda: np.zeros(env.action_space.n)) # number of occurrences (every visit is counted)
    Q = defaultdict(lambda: np.zeros(env.action_space.n)) # returns_sum/N averaged
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
# Iterate timesteps of a single episode
states, actions, rewards = zip(*episode)
# Gamma discount vector
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# For each state-action pair update returns_sum, N, Q for this episode
for i, state in enumerate(states):
# Gt = R_{t+1} + y R_{t+2} + y^2 R_{t+3} + ...
goal = sum(rewards[i:] * discounts[:-(i+1)])
# Update return_sums, N, Q
returns_sum[state][actions[i]] += goal
N[state][actions[i]] += 1
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
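###Markdown
The slicing in `sum(rewards[i:] * discounts[:-(1+i)])` can be hard to read at first, so here is a small worked example with made-up values: for a visit at index $i$ of a $T$-step episode both arrays have length $T-i$, and the sum is the discounted return $G_i = \sum_{k=0}^{T-i-1} \gamma^k R_{i+k+1}$.
###Code
import numpy as np
gamma = 0.9
rewards = (0, 0, 1.0)                                            # illustrative rewards R_1, R_2, R_3
discounts = np.array([gamma**i for i in range(len(rewards)+1)])  # [1, 0.9, 0.81, 0.729]
for i in range(len(rewards)):
    print(i, sum(rewards[i:] * discounts[:-(1+i)]))              # returns G_0, G_1, G_2 (approx. 0.81, 0.9, 1.0)
###Output
_____no_output_____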
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
# Generate an episode based on the epsilon-greedy policy
def generate_episode_from_Q(env, eps, Q, nA):
episode = []
state = env.reset()
while True:
# Get our e-greedy probabilities
probs = get_probs(eps, Q[state], nA)
        # Choose an action from the epsilon-greedy probabilities, or a random action if the state has not been seen yet
action = np.random.choice(np.arange(nA), p=probs) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
# return the epsilon-greedy action probabilities pi(a|s) for the given state
def get_probs(eps, Q_s, nA):
policy_s = np.ones(nA) * eps/nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - eps + (eps / nA)
return policy_s
# Update entries to the action-value Q matrix based on an episode
def update_Q(Q, episode, alpha, gamma):
# Q(s,a) <- Q(s,a) + alpha * (g - Q(s,a))
# Update Q matrix with the most recent episode
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
g = np.sum(rewards[i:] * discounts[:-(i+1)])
Q[state][actions[i]] += alpha*(g - Q[state][actions[i]])
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
# size of action space
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
eps = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# Eps decay capped at a min value
eps = max(eps*eps_decay, eps_min)
# Generate an episode following eps-greedy policy
episode = generate_episode_from_Q(env, eps, Q, nA)
# Update our Q matrix using this episode's findings
Q = update_Q(Q, episode, alpha, gamma)
    # Obtain the estimated optimal policy from the final action-value function:
    # for each state, choose the action with the highest estimated value (argmax over Q[s])
policy = {s:np.argmax(q) for s,q in Q.items()}
return policy, Q
###Output
_____no_output_____
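###Markdown
To see what `get_probs` produces, here is a quick standalone check that mirrors its logic with hypothetical values: with two actions and $\epsilon = 0.1$, the greedy action receives probability $1 - \epsilon + \epsilon/nA = 0.95$ and the other action $\epsilon/nA = 0.05$.
###Code
import numpy as np
Q_s = np.array([0.2, 0.5])   # hypothetical action values for one state
eps, nA = 0.1, 2
probs = np.ones(nA) * eps / nA
probs[np.argmax(Q_s)] = 1 - eps + (eps / nA)
print(probs)                 # -> [0.05 0.95]
###Output
_____no_output_____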
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000*2, 0.02)
###Output
Episode 1000000/1000000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
_____no_output_____
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
_____no_output_____
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# generate an episode
episode = generate_episode(env)
# obtain the states, actions, and rewards
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma ** i for i in range(len(rewards) + 1)])
# for i, state in enumerate(states):
# print('--------')
# print('episode:', episode)
# print('discounts: ', discounts)
# print('sum(', rewards[i:], ') * ', discounts[:-(1 + i)])
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:] * discounts[:-(1 + i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
# print('******')
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# Q = mc_prediction_q(env, 5, generate_episode_from_limit_stochastic, gamma=0.2)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
epsilon = eps_start
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
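###Markdown
A short illustration of the constant-$\alpha$ update used in `update_Q` above: unlike the running average of Part 1, $Q \leftarrow Q + \alpha\,(G - Q)$ weights recent returns more heavily than old ones, which is what lets the estimate keep tracking a policy that is still changing. The returns below are made up purely for illustration.
###Code
alpha, q = 0.02, 0.0
for g in [1.0, -1.0, 1.0, 1.0]:   # hypothetical returns observed for one state-action pair
    q += alpha * (g - q)          # each update moves q a small step toward the latest return
print(q)
###Output
_____no_output_____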
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
num_episodes = 500000
alpha = 0.02
policy, Q = mc_control(env, num_episodes, alpha, eps_decay=.999, eps_min=0.02)
print(policy)
###Output
Episode 500000/500000.{(14, 8, False): 1, (10, 9, False): 1, (20, 4, False): 0, (10, 10, False): 1, (14, 4, False): 0, (13, 2, False): 0, (19, 2, False): 0, (19, 10, False): 0, (20, 5, False): 0, (11, 6, False): 1, (17, 6, False): 0, (14, 7, False): 1, (12, 5, False): 0, (13, 10, False): 1, (11, 10, False): 1, (12, 10, True): 1, (12, 1, False): 1, (7, 3, False): 1, (17, 3, False): 0, (21, 3, False): 0, (12, 10, False): 1, (16, 1, False): 1, (19, 1, False): 0, (14, 6, False): 0, (14, 10, False): 1, (21, 10, False): 0, (19, 9, False): 0, (15, 4, False): 0, (20, 10, False): 0, (21, 7, True): 0, (16, 7, False): 1, (15, 4, True): 1, (15, 2, False): 0, (16, 8, True): 1, (13, 3, True): 1, (13, 3, False): 0, (20, 3, False): 0, (15, 10, False): 1, (15, 1, False): 1, (15, 9, False): 0, (20, 9, False): 0, (16, 6, False): 0, (13, 9, False): 1, (9, 3, False): 1, (17, 5, False): 0, (18, 10, False): 0, (15, 8, False): 1, (13, 4, False): 0, (17, 4, False): 0, (20, 2, False): 0, (13, 7, False): 1, (12, 1, True): 1, (14, 1, True): 1, (18, 5, False): 0, (19, 6, False): 0, (17, 9, False): 0, (19, 4, False): 0, (20, 1, False): 0, (7, 2, False): 1, (8, 1, False): 1, (9, 10, False): 1, (12, 8, False): 1, (16, 9, False): 1, (20, 7, False): 0, (16, 10, False): 1, (20, 9, True): 0, (12, 9, False): 1, (12, 2, False): 0, (16, 5, False): 0, (21, 5, False): 0, (6, 7, False): 1, (10, 4, False): 1, (7, 10, False): 1, (18, 3, False): 0, (14, 10, True): 1, (10, 1, False): 1, (14, 1, False): 1, (21, 1, False): 0, (10, 6, False): 1, (18, 6, False): 0, (18, 2, False): 0, (15, 6, False): 0, (20, 10, True): 0, (17, 10, False): 0, (8, 7, False): 1, (17, 1, True): 1, (20, 8, False): 0, (15, 5, False): 0, (9, 9, False): 1, (17, 8, True): 1, (17, 2, False): 0, (17, 7, False): 0, (6, 10, False): 1, (16, 3, True): 1, (21, 3, True): 0, (7, 4, False): 1, (4, 3, False): 1, (10, 3, False): 1, (16, 3, False): 0, (21, 6, True): 0, (21, 6, False): 0, (19, 5, False): 0, (18, 9, False): 0, (12, 6, False): 0, (21, 9, False): 0, (16, 5, True): 1, (20, 3, True): 0, (12, 4, True): 1, (19, 4, True): 0, (15, 9, True): 1, (14, 9, False): 1, (21, 2, False): 0, (14, 7, True): 1, (16, 7, True): 1, (7, 7, False): 1, (20, 2, True): 0, (5, 5, False): 1, (5, 7, False): 1, (15, 7, False): 1, (21, 8, False): 0, (13, 6, False): 0, (15, 7, True): 1, (13, 1, False): 1, (17, 8, False): 0, (16, 8, False): 1, (12, 3, False): 1, (18, 2, True): 0, (21, 10, True): 0, (8, 2, False): 0, (12, 4, False): 1, (6, 9, False): 1, (18, 3, True): 0, (9, 6, False): 1, (20, 6, False): 0, (19, 7, False): 0, (20, 1, True): 0, (16, 10, True): 1, (8, 10, False): 1, (21, 7, False): 0, (21, 2, True): 0, (17, 10, True): 1, (19, 10, True): 0, (18, 1, False): 0, (21, 4, True): 0, (18, 7, False): 0, (21, 8, True): 0, (11, 5, False): 1, (18, 8, False): 0, (6, 4, False): 1, (11, 2, False): 1, (10, 5, False): 1, (4, 8, False): 1, (10, 8, False): 1, (7, 9, False): 1, (13, 5, False): 0, (21, 1, True): 0, (11, 4, False): 1, (21, 4, False): 0, (17, 1, False): 0, (14, 2, False): 0, (15, 5, True): 1, (8, 6, False): 1, (7, 8, False): 1, (18, 6, True): 1, (13, 10, True): 1, (14, 3, False): 0, (13, 8, False): 1, (16, 9, True): 1, (12, 7, False): 1, (19, 5, True): 0, (11, 9, False): 1, (15, 3, True): 1, (17, 4, True): 1, (19, 3, False): 0, (18, 4, False): 0, (5, 1, False): 1, (19, 8, False): 0, (16, 4, False): 0, (5, 10, False): 1, (18, 10, True): 0, (14, 5, False): 0, (9, 5, False): 1, (10, 2, False): 1, (15, 3, False): 0, (13, 7, True): 1, (5, 3, False): 1, (19, 3, True): 0, (9, 1, False): 1, (10, 7, 
False): 1, (11, 1, False): 1, (19, 1, True): 0, (6, 1, False): 1, (12, 7, True): 1, (13, 6, True): 1, (11, 3, False): 1, (18, 1, True): 0, (8, 4, False): 1, (15, 10, True): 1, (13, 9, True): 1, (7, 5, False): 1, (15, 1, True): 1, (11, 8, False): 1, (19, 9, True): 0, (14, 2, True): 1, (9, 4, False): 1, (21, 9, True): 0, (14, 6, True): 1, (8, 8, False): 1, (17, 7, True): 1, (20, 4, True): 0, (4, 6, False): 1, (7, 1, False): 1, (16, 4, True): 0, (8, 5, False): 1, (8, 9, False): 1, (7, 6, False): 1, (13, 5, True): 1, (11, 7, False): 1, (16, 2, False): 0, (5, 9, False): 1, (17, 5, True): 1, (17, 2, True): 0, (20, 6, True): 0, (9, 2, False): 1, (17, 9, True): 1, (5, 2, False): 0, (6, 5, False): 1, (21, 5, True): 0, (18, 9, True): 0, (18, 5, True): 0, (14, 3, True): 1, (18, 8, True): 0, (19, 6, True): 0, (9, 7, False): 1, (16, 6, True): 1, (8, 3, False): 1, (14, 9, True): 1, (9, 8, False): 1, (5, 8, False): 0, (17, 3, True): 1, (4, 4, False): 1, (6, 6, False): 0, (13, 2, True): 1, (5, 6, False): 1, (6, 3, False): 0, (18, 7, True): 0, (14, 8, True): 1, (12, 9, True): 1, (13, 4, True): 1, (4, 7, False): 1, (4, 10, False): 1, (18, 4, True): 0, (13, 8, True): 1, (19, 8, True): 0, (6, 2, False): 1, (16, 2, True): 1, (19, 2, True): 0, (14, 5, True): 1, (19, 7, True): 0, (15, 2, True): 1, (14, 4, True): 1, (20, 7, True): 0, (12, 5, True): 1, (15, 6, True): 1, (15, 8, True): 1, (20, 8, True): 0, (6, 8, False): 1, (20, 5, True): 0, (4, 5, False): 0, (13, 1, True): 1, (5, 4, False): 1, (12, 6, True): 1, (12, 8, True): 1, (4, 2, False): 0, (16, 1, True): 1, (12, 3, True): 1, (17, 6, True): 1, (4, 9, False): 1, (12, 2, True): 1, (4, 1, False): 1}
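###Markdown
With the settings used in the call above (`eps_decay=.999`, `eps_min=0.02`), $\epsilon$ reaches its floor after roughly $\ln(0.02)/\ln(0.999) \approx 3{,}900$ episodes, so nearly all of the 500,000 episodes are run with $\epsilon = 0.02$.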
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
#print(env.action_space)
action = env.action_space.sample() #random policy
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(13, 2, False)
End game! Reward: -1.0
You lost :(
(18, 8, False)
End game! Reward: -1
You lost :(
(21, 4, True)
(12, 4, False)
(17, 4, False)
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((15, 2, False), 1, -1)]
[((12, 2, False), 1, 0), ((20, 2, False), 0, 1.0)]
[((15, 6, False), 1, 0), ((16, 6, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 1000 == 0:
            print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        # generate an episode with the supplied policy
        episode = generate_episode(env)
        # in Blackjack the only non-zero reward arrives at the end of the episode,
        # so with gamma=1 the return from every visited state equals the final reward
        reward = episode[-1][2]
        for state, action, _ in episode:
            N[state][action] += 1
            returns_sum[state][action] += reward
    # every-visit MC estimate: average return for each visited state-action pair
    for state in N.keys():
        for action in np.arange(env.action_space.n):
            if N[state][action] > 0:
                Q[state][action] = returns_sum[state][action] / N[state][action]
    return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)  # 500,000 episodes, matching the other runs in this notebook
#print(Q.items)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_policy(bj_env, policy):
#print('generating episode from current optimised policy...')
episode = []
state = bj_env.reset()
while True:
action = policy[state]
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
import random
def mc_control(env, num_episodes, alpha, gamma=0.9):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
    policy = defaultdict(lambda: 0)  # default to STICK (0) for states the policy has not seen yet
epsilon = 1
teller = 1
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
#print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
#sys.stdout.flush()
teller = teller + 1
rand = random.uniform(0, 1)
#print(rand)
#print(epsilon/float(teller))
        # explore with probability epsilon/teller (teller grows by 1 every 1000 episodes, so exploration shrinks over time)
        if rand < epsilon/float(teller):
episode = generate_episode_from_limit_stochastic(env)
else:
try:
episode = generate_episode_from_policy(env, policy)
except:
episode = generate_episode_from_limit_stochastic(env)
reward = episode[-1][2]
for el in episode:
state = el[0]
action = el[1]
Q[state][action] = Q[state][action] + alpha * (reward - Q[state][action])
## TODO: complete the function
for state in Q.keys():
#policy[state] = max([Q[state][0],Q[state][1]])
policy[state] = np.argmax([Q[state][0],Q[state][1]])
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.1)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
#print(policy)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(20, 4, False)
End game! Reward: -1.0
You lost :(
(15, 1, False)
(19, 1, False)
End game! Reward: -1
You lost :(
(16, 7, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((12, 7, False), 1, 0), ((19, 7, False), 1, -1)]
[((17, 5, False), 1, 0), ((21, 5, False), 0, 1.0)]
[((12, 7, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# Generate an episode
episode = generate_episode(env)
        # Separate out the states, actions and rewards using zip
states, actions, rewards = zip(*episode)
# Prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
        # Update the action-value function estimate for each state-action pair in the episode
for i, state in enumerate(states):
N[state][actions[i]]+=1.0
returns_sum[state][actions[i]]+=sum(rewards[i:]*discounts[:-(1+i)])
Q[state][actions[i]]=returns_sum[state][actions[i]]/N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def epsilon_greedy(Q, state, epsilon, nA):
    # Returns an action:
    # - with probability 1-epsilon: the greedy action (argmax over Q[state])
    # - with probability epsilon: a uniformly random action
if np.random.random()>epsilon:
#print('Selecting argmax')
return np.argmax(Q[state])
else:
return np.random.choice(range(nA))
def generate_episode_using_policy(bj_env, Q, epsilon, nA):
episode = []
state = bj_env.reset()
while True:
action = epsilon_greedy(Q, state, epsilon, nA)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
        if done:
break
return episode
def update_Q(episode, Q, alpha, gamma):
# Separate the states, actions and rewards
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]]=old_Q+alpha*(sum(rewards[i:]*discounts[:-(1+i)])-old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1, eps_decay=0.999999, eps_min = 0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
epsilon = max(epsilon*eps_decay,eps_min)
# policy = epsilon_greedy(Q)
# Generate an episode
episode = generate_episode_using_policy(env, Q, epsilon, nA)
# Update Q function
Q = update_Q(episode, Q, alpha, gamma)
policy = dict((k,np.argmax(v)) for k,v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, num_episodes=500000, alpha=0.02, eps_start=1, eps_decay=0.9999, eps_min = 0.05)
###Output
Episode 500000/500000.
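###Markdown
Once `policy` has been learned it can be queried directly for individual states; the state below is just an illustrative example (player sum 18, dealer showing a 10, no usable ace).
###Code
state = (18, 10, False)
if state in policy:
    print('HIT' if policy[state] == 1 else 'STICK')
###Output
_____no_output_____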
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(6, 10, False)
(11, 10, False)
(16, 10, False)
End game! Reward: 1.0
You won :)
(11, 1, False)
(21, 1, False)
End game! Reward: 1.0
You won :)
(17, 1, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((12, 1, False), 1, 0), ((17, 1, False), 1, -1)]
[((13, 4, False), 1, 0), ((17, 4, False), 1, -1)]
[((14, 1, True), 1, 0), ((21, 1, True), 0, 0.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
        ## TODO: complete the function
# generate an episode
episode = generate_episode(env)
# obtain the states, actions, and rewards
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states):
N[state][actions[i]] += 1
returns_sum[state][actions[i]] += sum(rewards[i:] * discounts[:-(1+i)])
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
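As a quick sanity check of the `discounts[:-(1+i)]` slicing used in `mc_prediction_q` above, the short snippet below (a standalone sketch with made-up rewards, not part of the original notebook) verifies that it reproduces the usual return $G_i = R_{i+1} + \gamma R_{i+2} + \ldots$:
```python
import numpy as np

gamma = 0.9
rewards = (0, 0, 1)                              # hypothetical rewards R_1, R_2, R_3
discounts = np.array([gamma**i for i in range(len(rewards) + 1)])

for i in range(len(rewards)):
    G = sum(rewards[i:] * discounts[:-(1 + i)])  # slicing trick used above
    G_ref = sum(gamma**k * r for k, r in enumerate(rewards[i:]))  # textbook G_i
    assert np.isclose(G, G_ref)
    print(i, G)                                  # approx: 0.81, 0.9, 1.0
```
The slice `discounts[:-(1+i)]` simply keeps the first `len(rewards) - i` discount factors, so they line up with the rewards from step `i` onward.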
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 3, generate_episode_from_limit_stochastic)
print(Q.items())
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
dict_items([((20, 2, False), array([1., 0.])), ((8, 3, False), array([0., 1.])), ((13, 3, False), array([0., 1.])), ((15, 3, False), array([1., 0.])), ((11, 10, False), array([ 0., -1.])), ((15, 10, False), array([ 0., -1.])), ((16, 10, False), array([ 0., -1.]))])
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + epsilon / nA
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
# extract states, actions, rewards
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# update Q[s][a] values
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start = 1, eps_decay=0.99999, eps_min = 0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
        ## TODO: complete the function
# decrement value of epsilon every episode by decay factor
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
print('dictQItems=',Q.items())
print('policy=',policy)
return policy, Q
###Output
_____no_output_____
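For intuition about the epsilon-greedy distribution returned by `get_probs`, here is a small illustrative check (the action values are hypothetical, and the function is restated only so the snippet is self-contained):
```python
import numpy as np

def get_probs(Q_s, epsilon, nA):
    """epsilon-greedy action probabilities, as defined in the cell above"""
    policy_s = np.ones(nA) * epsilon / nA
    policy_s[np.argmax(Q_s)] = 1 - epsilon + epsilon / nA
    return policy_s

Q_s = np.array([0.1, -0.4])          # hypothetical action values for one state
print(get_probs(Q_s, 1.0, 2))        # [0.5   0.5  ]  -> uniform at the start of training
print(get_probs(Q_s, 0.05, 2))       # [0.975 0.025]  -> mostly greedy near eps_min
```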
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 100, 0.02)
###Output
dictQItems= dict_items([((20, 10, False), array([ 0.02 , -0.0396])), ((11, 3, False), array([-0.02, 0. ])), ((13, 2, False), array([-0.0396, 0. ])), ((12, 10, False), array([ 0. , -0.02])), ((18, 9, False), array([ 0.019608, -0.02 ])), ((11, 8, False), array([ 0. , -0.02])), ((13, 8, False), array([-0.0396, -0.02 ])), ((19, 5, False), array([0.02, 0. ])), ((14, 8, False), array([-0.0396, 0. ])), ((18, 5, False), array([ 0. , -0.02])), ((21, 5, False), array([ 0. , -0.0396])), ((20, 2, False), array([ 0.02 , -0.0396])), ((9, 10, False), array([-0.02, 0.02])), ((19, 10, False), array([0.019608, 0. ])), ((5, 2, False), array([ 0. , -0.02])), ((11, 2, False), array([ 0. , -0.02])), ((20, 7, False), array([0.0396, 0. ])), ((9, 4, False), array([0. , 0.02])), ((16, 4, False), array([0.0396, 0. ])), ((10, 4, False), array([ 0. , -0.02])), ((14, 4, False), array([-0.0396, 0. ])), ((13, 10, False), array([ 0. , -0.02])), ((20, 6, False), array([ 0.02, -0.02])), ((8, 10, False), array([-0.0396, 0. ])), ((13, 7, False), array([ 0. , -0.02])), ((18, 1, False), array([ 0. , -0.02])), ((21, 1, False), array([ 0. , -0.02])), ((19, 1, False), array([0.0004, 0. ])), ((15, 3, True), array([-0.02, 0. ])), ((10, 9, False), array([0. , 0.02])), ((20, 9, False), array([0.02, 0. ])), ((16, 6, False), array([0. , 0.02])), ((17, 6, False), array([0.02, 0. ])), ((21, 6, True), array([0., 0.])), ((16, 2, True), array([ 0. , -0.02])), ((19, 2, True), array([-0.02, 0. ])), ((13, 3, False), array([-0.02, 0. ])), ((15, 3, False), array([-0.0004, -0.02 ])), ((17, 10, False), array([ 0. , -0.02])), ((20, 4, False), array([-0.02, -0.02])), ((20, 3, False), array([ 0. , -0.0396])), ((14, 7, False), array([-0.02, 0. ])), ((12, 9, False), array([ 0. , -0.02])), ((12, 7, False), array([0. , 0.02])), ((16, 7, False), array([0. , 0.02])), ((19, 7, False), array([0. , 0.02])), ((21, 5, True), array([ 0.02 , -0.0396])), ((20, 5, False), array([0.02, 0. ])), ((5, 9, False), array([ 0. , -0.02])), ((15, 9, False), array([-0.02, 0. ])), ((21, 7, True), array([0.02, 0. ])), ((15, 8, False), array([ 0. , -0.02])), ((20, 8, False), array([ 0.02, -0.02])), ((13, 4, False), array([-0.02, 0. ])), ((10, 3, False), array([0.02, 0.02])), ((21, 3, True), array([0.02, 0. ])), ((16, 4, True), array([0. , 0.02])), ((14, 10, False), array([-0.02, -0.02])), ((7, 5, False), array([ 0. , -0.02])), ((11, 5, False), array([-0.02, 0. ])), ((5, 1, False), array([ 0. , -0.02])), ((13, 1, False), array([ 0. , -0.02])), ((16, 1, False), array([ 0. , -0.02])), ((14, 1, False), array([-0.0396, 0. ])), ((15, 2, False), array([-0.02, 0. ])), ((12, 4, False), array([-0.0004, 0. ])), ((12, 6, False), array([0.02, 0. ])), ((7, 7, False), array([-0.02, 0. ])), ((15, 10, False), array([ 0. , -0.0396])), ((17, 7, False), array([0.02, 0. ])), ((19, 2, False), array([ 0. , -0.02])), ((7, 4, False), array([-0.02, 0. ])), ((7, 2, False), array([-0.02, 0. ])), ((10, 10, False), array([-0.02, 0. ])), ((15, 5, True), array([-0.02, 0. ])), ((11, 10, False), array([-0.02, 0. ])), ((15, 1, False), array([-0.02, 0. ])), ((14, 9, True), array([0. , 0.02])), ((21, 9, True), array([0.02, 0. ])), ((16, 2, False), array([0.02, 0. ])), ((17, 5, False), array([-0.02, 0. ])), ((12, 5, False), array([ 0. , -0.02])), ((9, 3, False), array([ 0. , -0.02])), ((19, 3, False), array([ 0. , -0.02])), ((16, 5, False), array([ 0. , -0.02])), ((12, 1, False), array([-0.02, 0. ])), ((7, 3, False), array([0. , 0.02])), ((15, 7, False), array([-0.02, 0. ])), ((13, 9, True), array([ 0. 
, -0.02])), ((13, 9, False), array([ 0. , -0.02])), ((16, 10, False), array([ 0. , -0.02])), ((7, 1, False), array([-0.02, 0. ])), ((6, 3, False), array([0. , 0.02])), ((16, 3, False), array([ 0. , -0.02])), ((12, 3, False), array([ 0.02, -0.02])), ((18, 3, False), array([0.02, 0. ])), ((15, 4, False), array([-0.02, 0. ])), ((19, 9, False), array([-0.02, 0. ])), ((13, 5, False), array([ 0. , -0.02]))])
policy= {(20, 10, False): 0, (11, 3, False): 1, (13, 2, False): 1, (12, 10, False): 0, (18, 9, False): 0, (11, 8, False): 0, (13, 8, False): 1, (19, 5, False): 0, (14, 8, False): 1, (18, 5, False): 0, (21, 5, False): 0, (20, 2, False): 0, (9, 10, False): 1, (19, 10, False): 0, (5, 2, False): 0, (11, 2, False): 0, (20, 7, False): 0, (9, 4, False): 1, (16, 4, False): 0, (10, 4, False): 0, (14, 4, False): 1, (13, 10, False): 0, (20, 6, False): 0, (8, 10, False): 1, (13, 7, False): 0, (18, 1, False): 0, (21, 1, False): 0, (19, 1, False): 0, (15, 3, True): 1, (10, 9, False): 1, (20, 9, False): 0, (16, 6, False): 1, (17, 6, False): 0, (21, 6, True): 0, (16, 2, True): 0, (19, 2, True): 1, (13, 3, False): 1, (15, 3, False): 0, (17, 10, False): 0, (20, 4, False): 0, (20, 3, False): 0, (14, 7, False): 1, (12, 9, False): 0, (12, 7, False): 1, (16, 7, False): 1, (19, 7, False): 1, (21, 5, True): 0, (20, 5, False): 0, (5, 9, False): 0, (15, 9, False): 1, (21, 7, True): 0, (15, 8, False): 0, (20, 8, False): 0, (13, 4, False): 1, (10, 3, False): 0, (21, 3, True): 0, (16, 4, True): 1, (14, 10, False): 0, (7, 5, False): 0, (11, 5, False): 1, (5, 1, False): 0, (13, 1, False): 0, (16, 1, False): 0, (14, 1, False): 1, (15, 2, False): 1, (12, 4, False): 1, (12, 6, False): 0, (7, 7, False): 1, (15, 10, False): 0, (17, 7, False): 0, (19, 2, False): 0, (7, 4, False): 1, (7, 2, False): 1, (10, 10, False): 1, (15, 5, True): 1, (11, 10, False): 1, (15, 1, False): 1, (14, 9, True): 1, (21, 9, True): 0, (16, 2, False): 0, (17, 5, False): 1, (12, 5, False): 0, (9, 3, False): 0, (19, 3, False): 0, (16, 5, False): 0, (12, 1, False): 1, (7, 3, False): 1, (15, 7, False): 1, (13, 9, True): 0, (13, 9, False): 0, (16, 10, False): 0, (7, 1, False): 1, (6, 3, False): 1, (16, 3, False): 0, (12, 3, False): 0, (18, 3, False): 0, (15, 4, False): 1, (19, 9, False): 1, (13, 5, False): 0}
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!wget -nc -q https://raw.githubusercontent.com/joaopamaral/deep-reinforcement-learning/master/monte-carlo/plot_utils.py
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(11, 10, False)
End game! Reward: -1.0
You lost :(
(13, 2, False)
End game! Reward: -1
You lost :(
(13, 2, False)
(15, 2, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((7, 9, False), 1, 0), ((15, 9, False), 1, 0), ((17, 9, False), 1, 0), ((18, 9, False), 1, -1)]
[((14, 7, False), 1, 0), ((15, 7, False), 1, -1)]
[((12, 5, False), 1, 0), ((13, 5, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards) + 1)])
for i, state in enumerate(states):
returns_sum[state][actions[i]] += (rewards[i:]*discounts[:-(1+i)]).sum()
N[state][actions[i]] += 1
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
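A note on the schedule used above: with `eps_decay=0.99999` and `eps_min=0.05`, epsilon decays multiplicatively and hits the floor after roughly $\ln(0.05)/\ln(0.99999) \approx 3 \times 10^5$ episodes. The sketch below is an illustrative calculation (not part of the original notebook) that confirms the crossover point:
```python
import math

eps_start, eps_decay, eps_min = 1.0, 0.99999, 0.05

# smallest k with eps_start * eps_decay**k <= eps_min
k = math.ceil(math.log(eps_min / eps_start) / math.log(eps_decay))
print(k)                                        # roughly 3e5 episodes
print(max(eps_start * eps_decay**k, eps_min))   # clamped to 0.05 from this point on
```
So with 500,000 episodes, the last ~200,000 episodes all run at the minimum exploration rate.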
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500_000, .02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
print(action)
state, reward, done, info = env.step(action)
print("State: " + str(state) + " Reward : " + str(reward) + " Done : " + str(done) + " Info: " + str(info))
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
print()
###Output
(21, 6, True)
0
State: (21, 6, True) Reward : 1.0 Done : True Info: {}
End game! Reward: 1.0
You won :)
(16, 10, False)
0
State: (16, 10, False) Reward : 1.0 Done : True Info: {}
End game! Reward: 1.0
You won :)
(20, 3, False)
0
State: (20, 3, False) Reward : -1.0 Done : True Info: {}
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((9, 6, False), 1, 0), ((19, 6, False), 0, 1.0)]
[((18, 8, False), 1, -1)]
[((7, 10, False), 1, 0), ((11, 10, False), 1, 0), ((16, 10, False), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# First visit Monte Carlo Method
episode = generate_episode(env)
        # compute discounted returns backwards: G_i = R_{i+1} + gamma * G_{i+1}
        cumu_return = np.zeros(len(episode))
        for i in range(len(episode) - 1, -1, -1):
            if i == len(episode) - 1:
                temp = 0
            else:
                temp = cumu_return[i+1]
            cumu_return[i] = episode[i][2] + gamma*temp
visited_state = defaultdict(lambda: np.zeros(env.action_space.n))
index = 0
for state, action, reward in episode:
if visited_state[state][action] == 0:
N[state][action] += 1
returns_sum[state][action] += cumu_return[index]
visited_state[state][action] = 1
index += 1
for state in N:
for action in range(0, env.action_space.n):
if N[state][action] != 0:
Q[state][action] = returns_sum[state][action] / N[state][action]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def GLIE(convergence_iters, iter_num, final_val):
if iter_num <= convergence_iters:
epsilon = (((final_val - 1)/convergence_iters)*iter_num) + 1
else:
epsilon = final_val
return epsilon
def epsilon_greedy(Qs, epsilon):
policy_s = epsilon * np.ones(Qs.shape[0])/Qs.shape[0]
max_index = np.argmax(Qs)
policy_s[max_index] = 1 - epsilon + (epsilon/Qs.shape[0])
return policy_s
def generate_episode_from_Q(bj_env, Q, epsilon):
episode = []
state = bj_env.reset()
while True:
if state in Q:
probs = epsilon_greedy(Q[state], epsilon)
action = np.random.choice(np.arange(2), p=probs)
else:
action = env.action_space.sample()
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
policy = defaultdict(lambda: np.zeros(nA))
convergence_iters = int(num_episodes*6/7)
# loop over episodes
for i_episode in range(1, num_episodes+1):
## TODO: complete the function
epsilon = GLIE(convergence_iters, i_episode, 0.1)
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{} epsilon = {} .".format(i_episode, num_episodes, epsilon), end="")
sys.stdout.flush()
episode = generate_episode_from_Q(env, Q, epsilon)
        # compute discounted returns backwards: G_i = R_{i+1} + gamma * G_{i+1}
        cumu_return = np.zeros(len(episode))
        for i in range(len(episode) - 1, -1, -1):
            if i == len(episode) - 1:
                temp = 0
            else:
                temp = cumu_return[i+1]
            cumu_return[i] = episode[i][2] + gamma*temp
visited_state = defaultdict(lambda: np.zeros(env.action_space.n))
index = 0
for state, action, reward in episode:
if visited_state[state][action] == 0:
Q[state][action] = Q[state][action] + alpha * (cumu_return[index] - Q[state][action])
visited_state[state][action] = 1
index += 1
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
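The `GLIE` helper above anneals epsilon linearly from 1 down to `final_val` over `convergence_iters` episodes and then holds it constant. A small check with illustrative values (restated inline so the snippet runs on its own):
```python
def glie_epsilon(convergence_iters, iter_num, final_val):
    # linear anneal from 1.0 down to final_val, then hold (mirrors GLIE above)
    if iter_num <= convergence_iters:
        return ((final_val - 1) / convergence_iters) * iter_num + 1
    return final_val

for i in (0, 50, 100, 200):
    print(i, glie_epsilon(100, i, 0.1))   # 1.0, 0.55, ~0.1, 0.1
```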
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.006)
###Output
Episode 500000/500000 epsilon = 0.1 .
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(12, 1, True)
(20, 1, True)
(20, 1, False)
End game! Reward: 1.0
You won :)
(15, 6, False)
End game! Reward: 1.0
You won :)
(12, 10, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((20, 5, True), 0, 0.0)]
[((14, 10, True), 1, 0), ((14, 10, False), 1, -1)]
[((16, 4, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
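The cell above is left as the original TODO. One possible way to complete it is sketched below; this assumes an epsilon-greedy behaviour policy with multiplicative epsilon decay, mirroring the other implementations in this document rather than any official solution, and the decay parameters are illustrative defaults:
```python
import sys
import numpy as np
from collections import defaultdict

def mc_control(env, num_episodes, alpha, gamma=1.0,
               eps_start=1.0, eps_decay=0.99999, eps_min=0.05):
    nA = env.action_space.n
    Q = defaultdict(lambda: np.zeros(nA))
    epsilon = eps_start
    for i_episode in range(1, num_episodes + 1):
        # monitor progress
        if i_episode % 1000 == 0:
            print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        epsilon = max(epsilon * eps_decay, eps_min)
        # sample one episode with an epsilon-greedy policy derived from Q
        episode, state = [], env.reset()
        while True:
            if state in Q:
                probs = np.ones(nA) * epsilon / nA
                probs[np.argmax(Q[state])] += 1 - epsilon
                action = np.random.choice(np.arange(nA), p=probs)
            else:
                action = env.action_space.sample()
            next_state, reward, done, info = env.step(action)
            episode.append((state, action, reward))
            state = next_state
            if done:
                break
        # constant-alpha update along the episode
        states, actions, rewards = zip(*episode)
        discounts = np.array([gamma**i for i in range(len(rewards) + 1)])
        for i, s in enumerate(states):
            G = sum(rewards[i:] * discounts[:-(1 + i)])
            Q[s][actions[i]] += alpha * (G - Q[s][actions[i]])
    # greedy policy with respect to the final action-value estimate
    policy = dict((k, np.argmax(v)) for k, v in Q.items())
    return policy, Q
```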
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space.n)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
2
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(14, 2, False)
End game! Reward: -1.0
You lost :(
(12, 10, True)
End game! Reward: -1.0
You lost :(
(17, 2, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((10, 10, False), 1, 0.0), ((12, 10, False), 1, 0.0), ((18, 10, False), 1, 0.0), ((20, 10, False), 0, 0.0)]
[((10, 8, False), 0, -1.0)]
[((18, 10, False), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
G = 0
is_visited = defaultdict(lambda: np.zeros(env.action_space.n))
for i,step in reversed(list(enumerate(episode))):
state, action, reward = step
G = reward + G*gamma
            # count each state-action pair once per episode; in Blackjack a pair
            # never repeats within an episode, so first- and every-visit coincide
            if is_visited[state][action] == 0:
                is_visited[state][action] = 1
                N[state][action] += 1
returns_sum[state][action] += G
for state in returns_sum:
for i,counter in enumerate(N[state]):
if counter != 0:
Q[state][i] = returns_sum[state][i]/counter
return Q
###Output
_____no_output_____
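The reversed loop above builds the return incrementally as `G = reward + gamma*G`. The snippet below (made-up rewards, purely for illustration) checks that this backward accumulation matches the forward discounted sums used in the other implementations:
```python
import numpy as np

gamma, rewards = 0.9, [0, 0, 1]    # hypothetical rewards R_1, R_2, R_3

# backward accumulation, as in the reversed loop above
G, backward = 0.0, []
for r in reversed(rewards):
    G = r + gamma * G
    backward.append(G)
backward.reverse()                 # backward[i] is now the return from step i

# forward discounted sums for comparison
forward = [sum(gamma**k * r for k, r in enumerate(rewards[i:]))
           for i in range(len(rewards))]
assert np.allclose(backward, forward)
print(backward)                    # [0.81, 0.9, 1.0]
```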
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_policy(env, policy):
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(2), p=policy[state]) if state in policy else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def policy_improvement(env, Q):
policy = defaultdict(lambda: np.ones(env.action_space.n)/env.action_space.n)
for state in Q:
max_idx = np.argmax(Q[state])
p = np.zeros(env.action_space.n)
p[max_idx] = 1
policy[state] = p
return policy
def policy_evaluation(env,Q,N,policy,episode_generator,alpha=1.0,gamma=1.0):
episode = episode_generator(env,policy)
G = 0
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
is_visited = defaultdict(lambda: np.zeros(env.action_space.n))
for i,step in reversed(list(enumerate(episode))):
state, action, reward = step
G = reward + G*gamma
        # count each state-action pair once per episode (pairs never repeat
        # within a Blackjack episode, so first- and every-visit coincide)
        if is_visited[state][action] == 0:
            is_visited[state][action] = 1
            N[state][action] += 1
returns_sum[state][action] += G
for state in returns_sum:
for i,counter in enumerate(N[state]):
if counter != 0:
Q[state][i] = Q[state][i] + alpha*(returns_sum[state][i] - Q[state][i])
return Q, N
def mc_control(env, num_episodes, alpha=1.0, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
N = defaultdict(lambda: np.zeros(env.action_space.n))
policy = defaultdict(lambda: np.ones(env.action_space.n)/env.action_space.n)
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
Q, N = policy_evaluation(env, Q, N, policy, generate_episode_from_policy,alpha, gamma)
policy = policy_improvement(env, Q)
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.03)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
policy = dict((k,np.argmax(v)) for k, v in Q.items())
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(14, 10, False)
End game! Reward: -1
You lost :(
(14, 8, False)
End game! Reward: -1.0
You lost :(
(20, 8, False)
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((14, 2, False), 1, 0), ((20, 2, False), 0, 1.0)]
[((9, 2, False), 1, 0), ((16, 2, False), 1, 0), ((21, 2, False), 1, -1)]
[((13, 1, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episodes = generate_episode(env)
length_episodes = len(episodes)
_,_,rewards = zip(*episodes)
for i,episode in enumerate(episodes):
state,action,reward = episode
N[state][action] += 1
            # discount relative to step i: gamma^0, gamma^1, ... over the remaining rewards
            discount_rates = [gamma**k for k in range(length_episodes - i)]
forward_rewards = rewards[i:]
G = np.sum([i*j for i,j in zip(discount_rates,forward_rewards)])
returns_sum[state][action] += G
Q[state][action] = returns_sum[state][action]/N[state][action]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
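The dictionary comprehension in the next cell converts the action-value estimate into a state-value estimate by weighting each action value with the policy probabilities described above (0.8/0.2 when the player's sum exceeds 18, 0.2/0.8 otherwise), i.e.

$$V_\pi(s) = \sum_{a} \pi(a \mid s)\, Q_\pi(s, a).$$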
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
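Although the update rule is not written out above, the step-size parameter is what gives constant-$\alpha$ MC control its name: after each episode, every visited state-action pair is nudged toward the sampled return,

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big( G_t - Q(S_t, A_t) \big).$$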
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
ep = 0.1
policy = defaultdict()
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# generating episodes
        state = env.reset()
        episode = []
        while True:
            best_action = np.argmax(Q[state])
            probs = [(1 - ep) if i == best_action else ep / (nA - 1) for i in range(nA)]
            action = np.random.choice(np.arange(nA), p=probs)
            next_state, reward, done, info = env.step(action)
            episode.append((state, action, reward))
            state = next_state
            if done:
                break
        _, _, rewards = zip(*episode)
        episode_length = len(episode)
        for i, (state, action, reward) in enumerate(episode):
            # discount the return from step i onward, starting at gamma**0
            discount_rates = [gamma**k for k in range(episode_length - i)]
            forward_rewards = rewards[i:]
            G = np.sum([d * r for d, r in zip(discount_rates, forward_rewards)])
            Q[state][action] += alpha * (G - Q[state][action])
    # derive the greedy policy from the final action-value estimate
    for s, values in Q.items():
        policy[s] = np.argmax(values)
    return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.001)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
print(action, reward)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(20, 1, False)
0 -1.0
End game! Reward: -1.0
You lost :(
(19, 2, False)
1 -1
End game! Reward: -1
You lost :(
(13, 7, False)
1 0
(18, 7, False)
0 0.0
End game! Reward: 0.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((21, 4, True), 1, 0), ((20, 4, False), 0, 1.0)]
[((18, 4, False), 1, -1)]
[((15, 1, False), 1, 0), ((21, 1, False), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = np.power(gamma, [i for i in range(len(episode)+1)])
seen = defaultdict(lambda: np.zeros(env.action_space.n))
for i, (state, action, reward) in enumerate(episode):
if seen[state][action] != 0:
continue
seen[state][action] += 1
discounted_rewards = np.sum(rewards[i:] * discounts[:-(i+1)])
N[state][action] += 1
returns_sum[state][action] += discounted_rewards
Q[state][action] = returns_sum[state][action] / N[state][action]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_decay=0.999, eps_min=0.02):
nA = env.action_space.n
Q = defaultdict(lambda: np.zeros(nA))
epsilon = 1.0
for i_episode in range(1, num_episodes+1):
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# epsilon-greedy
epsilon = max(epsilon * eps_decay, eps_min)
episode = []
state = env.reset()
while True:
best_q_action_i = np.argmax(Q[state])
probs = np.where([i == best_q_action_i for i in range(nA)], 1 - epsilon + epsilon / nA, epsilon / nA)
action = np.random.choice(np.arange(nA), p=probs)
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
states, actions, rewards = zip(*episode)
discounts = np.power(gamma, [i for i in range(len(episode)+1)])
for i, (state, action, _) in enumerate(episode):
discounted = np.sum(rewards[i:] * discounts[:-(i+1)])
Q[state][action] += alpha * (discounted - Q[state][action])
policy = defaultdict(lambda: 0)
for state, action_values in Q.items():
policy[state] = np.argmax(action_values)
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k, np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(10, 2, False)
End game! Reward: -1.0
You lost :(
(11, 1, False)
(21, 1, False)
End game! Reward: -1
You lost :(
(14, 2, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((15, 9, True), 1, 0), ((18, 9, True), 1, 0), ((21, 9, True), 1, 0), ((21, 9, False), 0, 1.0)]
[((15, 2, False), 0, 1.0)]
[((13, 10, False), 1, 0), ((16, 10, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
episode = generate_episode(env)
reversed_episode = reversed(episode)
Gt = 0
first_visits = defaultdict(lambda: np.ones(env.action_space.n, dtype=bool))
for (state, action, reward) in reversed_episode:
first_visit = first_visits[state][action]
Gt = Gt + reward
if first_visit:
N[state][action] = N[state][action] + 1
returns_sum[state][action] = returns_sum[state][action] + Gt
first_visits[state][action] = False
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
for state in returns_sum.keys():
for action in range(env.action_space.n):
if N[state][action] != 0:
Q[state][action] = returns_sum[state][action]/N[state][action]
return Q
sample_Q = mc_prediction_q(env=env, num_episodes=3, generate_episode=generate_episode_from_limit_stochastic)
sample_Q
# obtain the corresponding state-value function
sample_V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in sample_Q.items())
sample_V_to_plot
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def action_probs(action_values, epsilon):
nA = len(action_values)
probs = np.full(nA, epsilon/nA)
probs[np.argmax(action_values)] += 1-epsilon
return probs
sum(action_probs([1, 3, 2], 0.1))
np.dot([1.0, 2.0], [1.0, 2.0]) == 5.0
print([1.0**i for i in range(5)])
print([0.9**i for i in range(5)])
print((1, 2, 3)[1:])
def generate_episode(env, Q, nA, epsilon):
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=action_probs(Q[state], epsilon))
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_epsilon(i_episode, num_episodes):
epsilon = 1/i_episode
if epsilon < 0.05:
return 0.04
else:
return epsilon
def greedy_policy(Q):
return dict((state,np.argmax(actions)) for state, actions in Q.items())
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
epsilon = get_epsilon(i_episode, num_episodes)
episode = generate_episode(env=env, Q=Q, nA=nA, epsilon=epsilon)
states, actions, rewards = zip(*episode)
first_visits = defaultdict(lambda: np.ones(nA, dtype=bool))
discounts = [gamma**i for i in range(len(rewards)+1)]
for i, state in enumerate(states):
action = actions[i]
if first_visits[state][action]:
Gt = np.dot(list(rewards[i:]), discounts[:-i-1])
Q[state][action] = Q[state][action] + alpha*(Gt-Q[state][action])
first_visits[state][action] = False
policy = greedy_policy(Q)
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
print([1, 2, 3][:-1])
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, num_episodes=600000, alpha=0.01)
###Output
Episode 600000/600000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game!')
print('State:', state)
print('Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(13, 10, False)
End game!
State: (13, 10, False)
Reward: -1.0
You lost :(
(10, 6, False)
(14, 6, False)
(19, 6, False)
End game!
State: (19, 6, False)
Reward: 1.0
You won :)
(7, 10, False)
(11, 10, False)
(19, 10, False)
End game!
State: (23, 10, False)
Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((14, 6, False), 1, 0), ((20, 6, False), 1, -1)]
[((11, 2, False), 1, 0), ((12, 2, False), 1, 0), ((18, 2, False), 1, -1)]
[((7, 10, False), 1, 0), ((17, 10, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
        # generate an episode
episode = generate_episode(env)
# get the state, action, reward
states, actions, rewards = zip(*episode)
# calculate discount
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:] * discounts[:-(i+1)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 1000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
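The episode generator below needs the action probabilities of an $\epsilon$-greedy policy. As a minimal sketch (the helper name `epsilon_greedy_probs` is illustrative and not part of the notebook), the distribution can be built like this:

```python
import numpy as np

def epsilon_greedy_probs(Q_s, epsilon, nA):
    """Action probabilities of an epsilon-greedy policy for one state.

    Q_s: 1-D array of action values for the current state.
    """
    probs = np.ones(nA) * epsilon / nA    # every action gets epsilon / nA
    probs[np.argmax(Q_s)] += 1 - epsilon  # the greedy action gets the remaining mass
    return probs

# e.g. Q_s = [0.1, 0.4], epsilon = 0.1 -> probabilities [0.05, 0.95]
```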
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
episode = []
state = env.reset()
while True:
# update policy corresponding to epsilon-greedy
        if state in Q:
            # epsilon-greedy probabilities over the actions for this state
            policy = np.ones(nA) * epsilon / nA
            best_action = np.argmax(Q[state])
            policy[best_action] = 1 - epsilon + epsilon / nA
            # sample an action from the epsilon-greedy distribution
            action = np.random.choice(np.arange(nA), p=policy)
else:
action = env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1, eps_decay=0.9999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# initialize epsilon
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# set the value of epsilon, from 1.0 decay to 0.05, then fixed to 0.05
epsilon = max(epsilon * eps_decay, eps_min)
        # generate an episode
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# get the state, action, reward
states, actions, rewards = zip(*episode)
# calculate discount
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# update Q
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha * (sum(rewards[i:] * discounts[:-(i+1)]) - old_Q)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.05)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack **with a random policy**. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(5):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(20, 2, False)
End game! Reward: 1.0
You won :)
(19, 7, False)
End game! Reward: -1
You lost :(
(20, 1, False)
End game! Reward: -1
You lost :(
(12, 7, False)
End game! Reward: 1.0
You won :)
(20, 10, False)
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
# for i in range(3):
episode=generate_episode_from_limit_stochastic(env)
print("Episode : ",episode,'\n')
states,actions,rewards=zip(*episode)
gamma=0.9
print('States: ',states)
print('Actions: ',actions)
print('Rewards: ',rewards)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
print('Discounts: ',discounts,'\n')
for i in range(4):
print('i: ',i,'Gt:: ',sum(rewards[i:]*discounts[:-(i+1)]))
# for i, state in enumerate(states):
# print('i:',i,'state:',state)
###Output
Episode : [((13, 6, False), 0, -1.0)]
States: ((13, 6, False),)
Actions: (0,)
Rewards: (-1.0,)
Discounts: [1. 0.9]
i: 0 Gt:: -1.0
i: 1 Gt:: 0
i: 2 Gt:: 0
i: 3 Gt:: 0
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
FV = defaultdict(lambda: np.zeros(env.action_space.n))
print(N)
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episode=generate_episode(env)
states,actions,rewards=zip(*episode)
discounts=np.array([gamma**i for i in range(len(rewards)+1)])
# print('States: ',states)
# print('Actions: ',actions)
# print('Rewards: ',rewards)
# print('Discounts: ',discounts,'\n')
# print(discounts[:])
for i,state in enumerate(states):
# print('i: ',i,'State: ',state)
returns_sum[state][actions[i]]+=sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]]+=1.0
Q[state][actions[i]]=returns_sum[state][actions[i]]/N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
defaultdict(<function mc_prediction_q.<locals>.<lambda> at 0x1a0e8b91e0>, {})
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.03):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 800000, 0.02)
###Output
Episode 800000/800000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(5):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(11, 9, False)
End game! Reward: -1.0
You lost :(
(12, 10, False)
End game! Reward: -1.0
You lost :(
(8, 7, False)
(18, 7, False)
End game! Reward: 1.0
You won :)
(12, 10, False)
End game! Reward: 1.0
You won :)
(15, 5, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((13, 2, False), 1, 0), ((14, 2, False), 1, 0), ((18, 2, False), 1, 0), ((19, 2, False), 0, 1.0)]
[((21, 10, True), 0, 1.0)]
[((17, 8, True), 1, 0), ((17, 8, False), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma ** i for i in range(len(rewards) + 1)])
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
episode = generate_episode_from_limit_stochastic(env)
print(episode)
states, actions, rewards = zip(*episode)
print('\nstates: ', states)
print('\nactions: ', actions)
print('\nrewards: ', rewards)
###Output
[((19, 6, False), 0, 1.0)]
states: ((19, 6, False),)
actions: (0,)
rewards: (1.0,)
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
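The cell below is left as a TODO in this copy of the notebook. As one possible way to fill it in, here is a minimal sketch of constant-$\alpha$ MC control with an $\epsilon$-greedy behaviour policy (the function name `mc_control_sketch` and the decay parameters are illustrative defaults, not the notebook's own solution):

```python
import numpy as np
from collections import defaultdict

def mc_control_sketch(env, num_episodes, alpha, gamma=1.0,
                      eps_start=1.0, eps_decay=0.99999, eps_min=0.05):
    """Constant-alpha MC control with an epsilon-greedy behaviour policy."""
    nA = env.action_space.n
    Q = defaultdict(lambda: np.zeros(nA))
    epsilon = eps_start
    for i_episode in range(1, num_episodes + 1):
        epsilon = max(epsilon * eps_decay, eps_min)
        # generate one episode with the current epsilon-greedy policy
        episode, state = [], env.reset()
        while True:
            probs = np.ones(nA) * epsilon / nA
            probs[np.argmax(Q[state])] += 1 - epsilon
            action = np.random.choice(np.arange(nA), p=probs)
            next_state, reward, done, info = env.step(action)
            episode.append((state, action, reward))
            state = next_state
            if done:
                break
        # constant-alpha update toward the sampled return for each visited pair
        states, actions, rewards = zip(*episode)
        discounts = np.array([gamma**i for i in range(len(rewards) + 1)])
        for i, (s, a) in enumerate(zip(states, actions)):
            G = sum(rewards[i:] * discounts[:-(i + 1)])
            Q[s][a] += alpha * (G - Q[s][a])
    # greedy policy with respect to the final estimate
    policy = dict((s, np.argmax(v)) for s, v in Q.items())
    return policy, Q

# e.g. policy, Q = mc_control_sketch(env, 500000, 0.02)
```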
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(20, 5, False)
End game! Reward: -1
You lost :(
(15, 5, False)
End game! Reward: -1
You lost :(
(15, 2, True)
(18, 2, True)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((21, 8, True), 1, 0), ((19, 8, False), 0, -1.0)]
[((20, 4, False), 1, -1)]
[((18, 5, True), 1, 0), ((17, 5, False), 1, 0), ((19, 5, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# Implement the every-visit algorithm
cumulated_reward = 0
episode = generate_episode(env)
for state, action, reward in episode[::-1]:
cumulated_reward = gamma * cumulated_reward + reward
            returns_sum[state][action] += cumulated_reward  # accumulate the discounted return, not the single-step reward
N[state][action] += 1
for state, action_value in returns_sum.items():
for action, value in enumerate(action_value):
if N[state][action]:
Q[state][action] = returns_sum[state][action] / N[state][action]
else:
Q[state][action] = 0
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
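For reference, the constant-$\alpha$ update applied after each episode is $Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\big(G_t - Q(S_t, A_t)\big)$, and an $\epsilon$-greedy policy selects the greedy action with probability $1 - \epsilon + \epsilon/|\mathcal{A}|$ and every other action with probability $\epsilon/|\mathcal{A}|$.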
###Code
def get_probs(action_id, nA, eps):
    if action_id is None:  # action 0 (STICK) is a valid greedy action, so test explicitly for None
return [
1 / nA
for _ in range(nA)
]
else:
return [
1 - eps + eps / nA if idx == action_id
else eps / nA
for idx in range(nA)
]
def generate_episode_from_Q(bj_env, Q, policy, eps):
nA = bj_env.action_space.n
episode = []
state = bj_env.reset()
while True:
action = np.random.choice(
np.arange(nA),
p=get_probs(policy.get(state), nA, eps),
)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_policy(Q):
policy = dict()
for state, rewards in Q.items():
policy[state] = np.argmax(rewards)
return policy
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
eps = eps_start
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
policy = get_policy(Q)
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# Implement the every-visit algorithm
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
cumulated_reward = 0
episode = generate_episode_from_Q(env, Q, policy, eps)
for state, action, reward in episode[::-1]:
cumulated_reward = gamma * cumulated_reward + reward
            returns_sum[state][action] += cumulated_reward  # accumulate the discounted return, not the single-step reward
N[state][action] += 1
for state in returns_sum.keys():
for action in range(nA):
if N[state][action]:
value = returns_sum[state][action] / N[state][action]
Q[state][action] = (1 - alpha) * Q[state][action] + alpha * value # update with alpha
policy = get_policy(Q)
# Update eps
eps = max(eps * eps_decay, eps_min)
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(19, 5, False)
End game! Reward: 1.0
You won :)
(8, 2, False)
End game! Reward: 1.0
You won :)
(10, 3, False)
(17, 3, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((9, 1, False), 1, 0.0), ((17, 1, False), 1, -1.0)]
[((12, 10, False), 1, 0.0), ((15, 10, False), 0, -1.0)]
[((14, 7, False), 1, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
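A common trick for computing the per-visit returns (and the one used in the cell below) is to precompute a vector of discount factors and multiply it against slices of the reward sequence; here is a minimal, self-contained sketch with made-up numbers:
```
import numpy as np

rewards = [0.0, 0.0, -1.0]          # rewards from one hypothetical episode
gamma = 0.9                         # example discount rate
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# return following array index i: rewards[i] + gamma*rewards[i+1] + ...
returns = [sum(rewards[i:] * discounts[:-(1+i)]) for i in range(len(rewards))]
print(returns)                      # approximately [-0.81, -0.9, -1.0]
```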
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# Run one episode and split the result in three iterable items
states,actions,rewards = zip(*generate_episode(env))
# Calculate discounts
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# Find the every-visit MC updating the default dictionaries
for i,state in enumerate(states):
# Update the count
N[state][actions[i]] += 1.0
# Update the sum
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(i+1)])
# Update Q with the reward, Q is an average
Q[state][actions[i]] = returns_sum[state][actions[i]]/N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
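A small, self-contained sketch of how the $\epsilon$-greedy action probabilities can be built from a row of $Q$ (the function and variable names here are illustrative, not required):
```
import numpy as np

def epsilon_greedy_probs(Q_s, epsilon):
    """Probabilities over actions: the greedy action gets the extra 1 - epsilon mass."""
    nA = len(Q_s)
    probs = np.ones(nA) * epsilon / nA
    probs[np.argmax(Q_s)] += 1 - epsilon
    return probs

print(epsilon_greedy_probs(np.array([0.1, -0.4]), epsilon=0.2))   # -> [0.9 0.1]
```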
###Code
def mc_control(env, num_episodes, alpha, eps_0=0.5, eps_decay=0.99999, eps_min=0.05, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
policy = defaultdict(lambda: 0)
eps = eps_0
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
        # Generate one episode, collecting states, actions, and rewards as we go
        states, actions, rewards = [], [], []
state = env.reset()
# Update eps
        eps = eps_decay * eps
        if eps < eps_min:
            eps = eps_min
while True:
# Calculate probabilities
probs = list(np.ones(nA) * (eps/(nA-1)))
            probs[policy[state]] = 1 - eps  # greedy action for the full state tuple, not just the player sum
# Run one step
action = np.random.choice(np.arange(nA), p=probs)
next_state, reward, done, info = env.step(action)
states.append(state)
actions.append(action)
rewards.append(reward)
state = next_state
if done:
break
# Calculate discounts
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# Find the every-visit MC updating the default dictionaries
for i,state in enumerate(states):
# Update the sum
G = sum(rewards[i:]*discounts[:-(i+1)])
# Update Q
Q[state][actions[i]] += alpha*( G-Q[state][actions[i]] )
# Create a greedy policy
for state,actions in Q.items():
policy[state] = np.argmax(Q[state])
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 100000, 0.04)
###Output
Episode 100000/100000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(6):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
print(action)
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(20, 8, False)
1
End game! Reward: -1.0
You lost :(
(16, 6, False)
0
End game! Reward: 1.0
You won :)
(19, 6, True)
0
End game! Reward: -1.0
You lost :(
(10, 7, False)
0
End game! Reward: -1.0
You lost :(
(18, 6, False)
1
(19, 6, False)
1
End game! Reward: -1.0
You lost :(
(12, 10, False)
1
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((16, 3, False), 1, 0.0), ((19, 3, False), 0, 1.0)]
[((15, 5, False), 1, 0.0), ((17, 5, False), 1, -1.0)]
[((20, 3, False), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
episode = generate_episode_from_limit_stochastic(env)
episode
s, a, r = zip(*episode)
s
a
r
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
        # generate an episode
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
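One worked number that may help when picking `eps_decay` and `eps_min`: with multiplicative decay, $\epsilon_k = \epsilon_0 \, d^{\,k}$ reaches the floor when $k \ge \ln(\epsilon_{\min}/\epsilon_0)/\ln d$; for $\epsilon_0 = 1$, $d = 0.99999$ and $\epsilon_{\min} = 0.05$ that is roughly $\ln(0.05)/\ln(0.99999) \approx 3.0 \times 10^{5}$ episodes, so with fewer episodes than that the agent never settles down to the minimum exploration rate.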
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.05)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(17, 10, False)
End game! Reward: -1
You lost :(
(15, 4, True)
End game! Reward: 1.0
You won :)
(13, 2, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((19, 10, False), 0, -1.0)]
[((21, 10, True), 0, 1.0)]
[((19, 6, False), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
# Every visit Monte Carlo
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episode = generate_episode(env)
cumulated_reward = 0
for step_i in reversed(range(len(episode))):
step = episode[step_i]
state = step[0]
action = step[1]
reward = step[2]
            cumulated_reward = reward + gamma * cumulated_reward  # discount relative to the visited step
returns_sum[state][action] += cumulated_reward
N[state][action] += 1
for state in N.keys():
for action in range(len(N[state])):
if N[state][action] > 0:
Q[state][action] = returns_sum[state][action] / N[state][action]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 50000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 50000/50000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
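Once `Q` has been estimated, recovering the greedy policy is a one-liner; a minimal sketch, assuming `Q` maps states to NumPy arrays of action values (the states and values below are made up):
```
import numpy as np

Q = {(20, 10, False): np.array([0.4, -0.8]),    # toy action values for two states
     (13, 2, False):  np.array([-0.3, -0.1])}
policy = dict((s, int(np.argmax(a_values))) for s, a_values in Q.items())
print(policy)   # {(20, 10, False): 0, (13, 2, False): 1}
```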
###Code
import random
def generate_episode_from_policy(bj_env, policy, epsilon=0.2):
episode = []
state = bj_env.reset()
done = False
while not done:
if random.random() > epsilon:
action = policy[state]
else:
action = np.random.choice(np.arange(bj_env.action_space.n))
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
return episode
def mc_control(env, num_episodes, alpha=0.01, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
policy = defaultdict(lambda: np.random.choice(np.arange(nA)))
epsilon = 1
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{} epsilon {}.".format(i_episode, num_episodes, epsilon), end="")
sys.stdout.flush()
epsilon *= 0.99995
new_Q = mc_prediction_q(env, 1, lambda env_gen : generate_episode_from_policy(env_gen, policy, epsilon), gamma)
for state in new_Q.keys():
for action in range(len(new_Q[state])):
Q[state][action] += alpha * (new_Q[state][action] - Q[state][action])
for state in Q.keys():
policy[state] = np.argmax(Q[state])
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 100000)
###Output
Episode 100000/100000 epsilon 0.006737441652362722.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(21, 10, True)
(16, 10, False)
End game! Reward: -1.0
You lost :(
(12, 1, False)
End game! Reward: -1.0
You lost :(
(13, 10, False)
(15, 10, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((8, 3, False), 1, 0.0), ((12, 3, False), 1, 0.0), ((18, 3, False), 1, 0.0), ((19, 3, False), 0, 0.0)]
[((13, 5, False), 1, 0.0), ((16, 5, False), 0, 1.0)]
[((11, 6, False), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# generate an episode
episode = generate_episode(env)
# obtain the states, actions, and rewards
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
# initialize empty dictionary of arrays
nA = env.action_space.n
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
epsilon = eps_start
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, num_episodes=100000, alpha=0.02)
###Output
Episode 100000/100000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo Methods[Monte Carlo gitbook](https://dnddnjs.gitbooks.io/rl/content/mc_prediction.html) In this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
import random
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment. ![](ruleofblackjack.png)
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(14, 2, False)
End game! Reward: -1.0
You lost :(
(13, 4, False)
End game! Reward: 1.0
You won :)
(17, 7, False)
End game! Reward: 0.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode += [(state, action, reward)]
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((17, 2, False), 1, -1.0)]
[((20, 7, False), 0, 1.0)]
[((18, 5, True), 1, 0.0), ((17, 5, False), 0, 0.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.![](./images/firstvisitMC.png)![](./images/everyvisitMC.png)Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.![](./images/pcfirstvisitMC.png)
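Since the choice between the two variants is left open, here is a tiny, self-contained sketch of the extra bookkeeping first-visit MC needs (the toy episode below is made up for illustration and would not occur in real Blackjack):
```
# toy episode: the pair ((13, 2, False), 1) occurs twice
episode = [((13, 2, False), 1, 0.0), ((13, 2, False), 1, 0.0), ((18, 2, False), 0, 1.0)]
seen, first_visit_steps = set(), []
for i, (state, action, reward) in enumerate(episode):
    if (state, action) in seen:      # first-visit MC skips repeat occurrences
        continue
    seen.add((state, action))
    first_visit_steps.append(i)
print(first_visit_steps)             # [0, 2]
```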
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states) :
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(i+1)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
episode = generate_episode_from_limit_stochastic(env)
states, actions, rewards = zip(*episode)
print(f"list(zip(*episode)) : {list(zip(*episode))}\n")
print(f"states, actions, rewards : {states}, {actions}, {rewards}")
normal_dict = dict()
default_dict = defaultdict(int)
print(default_dict[0])
###Output
0
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._) ![](./images/pcconstantalpha.png)
###Code
import random
def get_probs(Q_s, eps, nA) :
policy_s = np.ones(nA)*eps / nA
policy_s[np.argmax(Q_s)] = 1 - eps + eps/nA
return policy_s
def generate_episode_from_q(env, Q, eps) :
    episode = []
    state = env.reset()
    nA = env.action_space.n
    while True :
        # epsilon-greedy action based on the Q-value row for the current state
        action = np.random.choice(np.arange(nA), p=get_probs(Q[state], eps, nA)) \
                    if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode += [(state, action, reward)]
state = next_state
if done :
break
return episode
def update_Q(episode, Q, alpha, gamma) :
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states) :
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(discounts[:-(i+1)] * rewards[i:]) - old_Q)
return Q
import random
print(random.random())
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
discounts = np.array([gamma**i for i in range(100)])
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# generate episode from epsilon greedy.
eps = max(eps_start*eps_decay**i_episode, eps_min)
episode = generate_episode_from_q(env, Q, eps)
Q = update_Q(episode, Q, alpha, gamma)
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
x = 1 if False else 2
print(x)
i = np.argmin([1,2,3,4,5])
print(i)
###Output
0
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
help(env.step)
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(13, 10, False)
End game! Reward: -1.0
You lost :(
(20, 7, False)
End game! Reward: -1
You lost :(
(12, 7, False)
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
# Note: the first element of each state tuple is the player's current sum; a usable ace counts as 11 (e.g., 2 + Ace = 13).
###Output
[((12, 7, False), 1, -1)]
[((17, 10, False), 1, -1)]
[((13, 3, False), 1, 0), ((17, 3, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
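As a reminder of the quantity being averaged (an editorial note using the notation above), the return that follows a visit at time $t$ is $$G_t = R_{t+1} + \gamma R_{t+2} + \cdots + \gamma^{T-t-1} R_{T},$$ and the `discounts` array built in the code below supplies exactly these powers of $\gamma$.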
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
        episode = generate_episode(env)  # use the policy passed in via generate_episode
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards))])
for i, state in enumerate(states):
N[state][actions[i]] += 1
            # G_t = R_{t+1} + gamma*R_{t+2} + ...; be careful with the indices
            # accumulate the discounted return that follows this (state, action) pair
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:(len(rewards)-i)])
# average by counts of state-action pair
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
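For reference (an editorial note consistent with the code that follows), the constant-$\alpha$ update applied to each state-action pair visited in an episode is $$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\big(G_t - Q(S_t, A_t)\big),$$ where $G_t$ is the sampled return from time $t$. This is algebraically the same as the `(1 - alpha) * Q + alpha * G_t` line in the implementation below, and actions are selected $\epsilon$-greedily with respect to the current $Q$.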
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
episode = []
state = env.reset()
while True:
if state in Q:
probs = np.ones(nA) * epsilon / nA
best_action = np.argmax(Q[state])
probs[best_action] += 1 - epsilon
            action = np.random.choice(np.arange(nA), p=probs)
else:
action = env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = 1.0
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
epsilon = max(epsilon*0.9, 0.05)
episode = generate_episode_from_Q(env, Q, epsilon, nA)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards))])
for i, state in enumerate(states):
G_t = sum(rewards[i:]*discounts[:(len(rewards)-i)])
Q[state][actions[i]] = (1- alpha) * Q[state][actions[i]] + alpha * G_t
    policy = defaultdict(int)  # unseen states default to action 0 (STICK)
for key in Q.keys():
policy[key] = np.argmax(Q[key])
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.1)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(21, 10, True)
End game! Reward: 1.0
You won :)
(16, 10, False)
End game! Reward: -1.0
You lost :(
(19, 6, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((14, 10, False), 1, 0.0), ((19, 10, False), 0, -1.0)]
[((14, 10, False), 1, 0.0), ((16, 10, False), 1, -1.0)]
[((16, 10, False), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
visit = set()
for i, state in enumerate(states):
if state not in visit:
visit.add(state)
N[state][actions[i]] += 1
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
for key in returns_sum.keys():
Q[key] = returns_sum[key]/N[key]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
#Q = mc_prediction_q(env, 500, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
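As a small numerical check of the $\epsilon$-greedy probabilities constructed below (the action values here are hypothetical), with `nA = 2` and `epsilon = 0.1` the greedy action receives probability $1 - \epsilon + \epsilon/nA = 0.95$ and the other action receives $0.05$:
```
import numpy as np

nA, epsilon = 2, 0.1
Q_s = np.array([0.3, -0.1])                 # hypothetical action values for one state
probs = np.ones(nA) * epsilon / nA          # epsilon/nA for every action
probs[np.argmax(Q_s)] = 1 - epsilon + epsilon / nA
print(probs)                                # [0.95 0.05]
```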
###Code
def get_prob_from_Q(probs, eps, nA):
best_A = np.argmax(probs)
output = np.ones(nA) * eps / nA
output[best_A] = 1 - eps + eps/nA
return output
def generate_episode_from_Q(env, Q, eps, nA):
episode = []
state = env.reset()
while True:
probs = get_prob_from_Q(Q[state], eps, nA)
action = np.random.choice(np.arange(nA), p=probs) if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
epsilon = max(epsilon*eps_decay, eps_min)
episode = generate_episode_from_Q(env, Q, epsilon, nA)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
visit = set()
for i, state in enumerate(states):
if state not in visit:
visit.add(state)
Q[state][actions[i]] = Q[state][actions[i]] + alpha*(sum(rewards[i:]*discounts[:-(1+i)])-Q[state][actions[i]])
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 50000, 0.02)
###Output
Episode 50000/50000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
#print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
print(state)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(18, 9, False)
End game! Reward: -1.0
You lost :(
(17, 3, False)
End game! Reward: 0.0
You lost :(
(21, 7, True)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((12, 10, False), 1, 0), ((21, 10, False), 0, 1.0)]
[((20, 2, False), 0, 1.0)]
[((6, 10, False), 1, 0), ((9, 10, False), 1, 0), ((14, 10, False), 1, 0), ((18, 10, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
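The solution below performs every-visit updates, which is fine here since the two techniques are equivalent for Blackjack. Purely for illustration, a first-visit filter over an unpacked episode could look like this sketch (the episode data is contrived and hypothetical):
```
states  = ((13, 10, False), (18, 10, False), (13, 10, False))
actions = (1, 1, 1)
seen = set()
for i, state in enumerate(states):
    if (state, actions[i]) in seen:
        continue                            # skip repeat visits of this state-action pair
    seen.add((state, actions[i]))
    print("first visit at step", i, "->", state, actions[i])
```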
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# generate an episode
episode = generate_episode(env)
# obtain the states, actions, and rewards
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
## TODO: complete the function
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
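Once the cell below has produced `policy`, one informal sanity check (an editorial sketch; `evaluate_policy` is not part of the exercise) is to simulate a batch of games with the learned policy and look at the average reward, falling back to `STICK` for any state the dictionary has never seen:
```
def evaluate_policy(env, policy, n_games=10000):
    """Roughly estimate the average reward per game of a deterministic policy dict."""
    total = 0.0
    for _ in range(n_games):
        state = env.reset()
        while True:
            action = policy.get(state, 0)   # default to STICK (action 0) for unseen states
            state, reward, done, info = env.step(action)
            if done:
                total += reward
                break
    return total / n_games

# evaluate_policy(env, policy)
```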
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(19, 6, False)
End game! Reward: -1.0
You lost :(
(21, 4, True)
(21, 4, False)
End game! Reward: 1.0
You won :)
(19, 6, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
generate_episode_from_limit_stochastic(env)
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
ep = generate_episode(env)
s, a, r = zip(*ep)
discounts = np.array([gamma**i for i in range(len(s))])
for ts in range(len(s)):
            returns_sum[s[ts]][a[ts]] += sum(r[ts:] * discounts[:len(s) - ts])  # discounting starts at gamma**0 from step ts
N[s[ts]][a[ts]] += 1
Q[s[ts]][a[ts]] = returns_sum[s[ts]][a[ts]] / N[s[ts]][a[ts]]
return Q
mc_prediction_q(env, 1, generate_episode_from_limit_stochastic, gamma=0.9)
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 100000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 100000/100000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
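One way to read the constant-$\alpha$ update (an editorial note) is as a generalization of the running average used in Part 1: rewriting it as $$Q(S_t, A_t) \leftarrow (1-\alpha)\,Q(S_t, A_t) + \alpha\, G_t$$ shows that $\alpha$ controls how much weight the most recent return receives. Choosing $\alpha = 1/N(S_t, A_t)$ would recover the ordinary sample average from Part 1, while a fixed $\alpha$ keeps adapting to returns generated by the most recent (and improving) policy.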
###Code
def generate_episode_from_greedy_epsilon(env, Q, epsilon, nA):
episode = []
state = env.reset()
while True:
best_a = np.argmax(Q[state])
policy_a_s = np.ones(nA) * epsilon / nA
policy_a_s[best_a] = 1 - epsilon + epsilon / nA
action = np.random.choice(np.arange(nA), p=policy_a_s)
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_min=.1):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
epsilon = max(np.exp(-i_episode/5000), eps_min)
ep = generate_episode_from_greedy_epsilon(env, Q, epsilon, nA)
## TODO: complete the function
s, a, r = zip(*ep)
discounts = np.array([gamma**i for i in range(len(s))])
for ts in range(len(s)):
            est_tot_return = sum(r[ts:] * discounts[:len(s) - ts])  # discounted return from step ts onward
Q[s[ts]][a[ts]] = Q[s[ts]][a[ts]] + alpha * (est_tot_return - Q[s[ts]][a[ts]])
    policy = dict((k, np.argmax(v)) for k, v in Q.items())  # map each state to its greedy action, not its value
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, num_episodes=50000, alpha=.02, gamma=1, eps_min=.2)
###Output
Episode 50000/50000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
The **true** optimal policy $\pi_*$ can be found in Figure 5.2 of the [textbook](http://go.udacity.com/rl-textbook) (and appears below). Compare your final estimate to the optimal policy - how close are you able to get? If you are not happy with the performance of your algorithm, take the time to tweak the decay rate of $\epsilon$, change the value of $\alpha$, and/or run the algorithm for more episodes to attain better results.![True Optimal Policy](images/optimal.png)
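To compare beyond the plot, the policy dictionary can be reshaped into the two grids used in the figure, one per usable-ace value (a small editorial sketch; `policy_to_grid` is not part of the exercise and assumes the `policy` dict produced above):
```
import numpy as np

def policy_to_grid(policy, usable_ace):
    """Rows are player sums 11-21, columns are dealer cards 1-10; 1 = HIT, 0 = STICK, -1 = unseen."""
    grid = np.full((11, 10), -1)
    for (player_sum, dealer_card, ace), action in policy.items():
        if ace == usable_ace and 11 <= player_sum <= 21:
            grid[player_sum - 11, dealer_card - 1] = action
    return grid

print(policy_to_grid(policy, usable_ace=False))
print(policy_to_grid(policy, usable_ace=True))
```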
###Code
policy
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(15, 1, False)
End game! Reward: -1.0
You lost :(
(15, 10, False)
End game! Reward: -1.0
You lost :(
(8, 6, False)
(17, 6, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((17, 3, False), 1, -1.0)]
[((15, 6, False), 0, -1.0)]
[((6, 6, False), 1, 0.0), ((16, 6, False), 1, 0.0), ((17, 6, False), 1, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
# first visit
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes + 1):
# monitor progress
if i_episode % 1000 == 0:
print(f"\rEpisode {i_episode}/{num_episodes}.", end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
visited = defaultdict(lambda: np.zeros(env.action_space.n))
discounts = np.array([gamma**i for i in range(len(rewards))])
for t in range(len(episode)):
s, a = states[t], actions[t]
if visited[s][a] == 0:
visited[s][a] = 1
N[s][a] += 1
returns_sum[s][a] += np.sum(discounts[:len(rewards) - t] * rewards[t:])
Q[s][a] = returns_sum[s][a] / N[s][a]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0] > 18) * (np.dot([0.8, 0.2], v)) + (k[0] <= 18) * (np.dot([0.2, 0.8], v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
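Because exploration matters a great deal here, it helps to see how quickly $\epsilon$ falls under the multiplicative schedule used below; this standalone sketch (an editorial addition) uses the same default constants as the function signature that follows:
```
eps_start, eps_decay, eps_min = 1.0, 0.99999, 0.05
for n in [1, 10000, 100000, 300000, 500000]:
    # closed form of repeatedly applying eps = max(eps * eps_decay, eps_min)
    value = max(eps_min, eps_start * eps_decay ** n)
    print("after {} episodes: epsilon ~ {:.3f}".format(n, value))
```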
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
"""Generate an episode from following the epsilon-greedy policy"""
episode = []
state = env.reset()
while True:
action = np.random.choice(range(nA), p=get_probs(Q[state], epsilon, nA)) if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
"""Obtain the action probabilities corresponding to epsilon-greedy policy"""
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] += 1 - epsilon
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
"""Update the action-value function estimate using the most recent episode"""
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards))])
visited = defaultdict(lambda: np.zeros(env.action_space.n))
for t in range(len(states)):
s, a = states[t], actions[t]
        if visited[s][a] == 0:
            visited[s][a] = 1
            G = sum(rewards[t:] * discounts[:len(rewards) - t])
            Q[s][a] = Q[s][a] + alpha * (G - Q[s][a])
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes + 1):
# monitor progress
if i_episode % 1000 == 0:
print(f"\rEpisode {i_episode}/{num_episodes}.", end="")
sys.stdout.flush()
## TODO: complete the function
        epsilon = max(epsilon * eps_decay, eps_min)  # decay epsilon, but never below eps_min
episode = generate_episode_from_Q(env, Q, epsilon, nA)
Q = update_Q(env, episode, Q, alpha, gamma)
policy = dict((k, np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(8, 10, False)
End game! Reward: -1.0
You lost :(
(18, 1, False)
End game! Reward: -1.0
You lost :(
(15, 8, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((15, 6, False), 1, -1)]
[((18, 6, False), 1, 0), ((21, 6, False), 0, 1.0)]
[((12, 10, False), 1, 0), ((13, 10, False), 1, 0), ((16, 10, False), 1, 0), ((17, 10, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
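A brief aside on the NumPy idiom used in the update below (the values here are illustrative): multiplying the `rewards` tuple by a slice of the `discounts` array performs an element-wise product, so the sum is exactly the discounted return from step $i$ onward.
```
import numpy as np

rewards = (0.0, 0.0, -1.0)                  # a hypothetical episode's rewards
gamma = 0.9
discounts = np.array([gamma**i for i in range(len(rewards) + 1)])
i = 1                                       # return following the second time step
G_i = sum(rewards[i:] * discounts[:-(1 + i)])
print(G_i)                                  # 0.0*1.0 + (-1.0)*0.9 = -0.9
```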
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# first visit monte carlo solution
for i, state in enumerate(states):
            first_occurrence_idx = next(j for j, x in enumerate(episode) if x[0] == state)
            returns_sum[state][actions[i]] += sum(rewards[first_occurrence_idx:]*discounts[:-(1+first_occurrence_idx)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
# Every visit monte carlo
# for i, state in enumerate(states):
# returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
# N[state][actions[i]] += 1.0
# Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# Q = mc_prediction_q(env, 5, generate_episode_from_limit_stochastic, 0.99)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
action = env.action_space.sample()
print(f'(St, At) = ({state}, {action})')
state, reward, done, info = env.step(action)
print(f'(St+1, R) = ({state}, {reward})')
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(St, At) = ((19, 4, False), 1)
(St+1, R) = ((24, 4, False), -1.0)
End game! Reward: -1.0
You lost :(
(St, At) = ((11, 9, False), 1)
(St+1, R) = ((16, 9, False), 0.0)
(St, At) = ((16, 9, False), 1)
(St+1, R) = ((19, 9, False), 0.0)
(St, At) = ((19, 9, False), 0)
(St+1, R) = ((19, 9, False), 0.0)
End game! Reward: 0.0
You lost :(
(St, At) = ((14, 7, True), 1)
(St+1, R) = ((20, 7, True), 0.0)
(St, At) = ((20, 7, True), 0)
(St+1, R) = ((20, 7, True), 1.0)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
# Note: eps-soft policy from the spec above: P(STICK)=0.8 if the player's sum exceeds 18, otherwise P(HIT)=0.8
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(10):
print(str(i + 1) + ': ', generate_episode_from_limit_stochastic(env))
###Output
1: [((16, 10, True), 1, 0.0), ((19, 10, True), 1, 0.0), ((16, 10, False), 1, -1.0)]
2: [((13, 7, False), 1, -1.0)]
3: [((11, 7, False), 1, 0.0), ((21, 7, False), 0, 1.0)]
4: [((6, 6, False), 1, 0.0), ((14, 6, False), 1, 0.0), ((17, 6, False), 1, -1.0)]
5: [((7, 7, False), 1, 0.0), ((17, 7, False), 1, -1.0)]
6: [((12, 10, False), 1, -1.0)]
7: [((13, 10, False), 1, 0.0), ((14, 10, False), 1, 0.0), ((16, 10, False), 1, -1.0)]
8: [((8, 9, False), 1, 0.0), ((19, 9, True), 0, 0.0)]
9: [((12, 9, True), 1, 0.0), ((12, 9, False), 0, 1.0)]
10: [((11, 10, False), 1, 0.0), ((21, 10, False), 0, 1.0)]
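###Markdown
Before moving on, it can help to see how a return is read off such an episode. Below is a minimal sketch (using one of the episodes printed above, with $\gamma=1$) of unpacking an episode and computing $G_0 = \sum_k \gamma^k R_{k+1}$; the prediction algorithm in the next part performs exactly this bookkeeping, but per state-action pair.
###Code
# A hand-checkable example: unpack one episode and compute its discounted return.
gamma = 1.0
episode = [((16, 10, True), 1, 0.0), ((19, 10, True), 1, 0.0), ((16, 10, False), 1, -1.0)]
states, actions, rewards = zip(*episode)
G_0 = sum((gamma**k) * r for k, r in enumerate(rewards))
print(states, actions, rewards, G_0)   # G_0 = -1.0 for this episode
###Output
_____no_output_____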
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
np.seterr(divide='ignore', invalid='ignore')
def add_rewards(t):
return t[-1]
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# Note: Initialize empty dictionaries of arrays of shape n
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes + 1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
episode = generate_episode(env)
visited = []
for j in range(0, len(episode)):
# episode[j] = ((18, 7, False), 1, 0.0)
s, a, r = episode[j]
# Is it the first time we visit this St=s
if s not in visited:
visited.append(s)
N[s][a] += 1.0
d = 0
for jj in range(j, len(episode)):
                    reward = episode[jj][-1]
returns_sum[s][a] += (gamma**d) * reward
d += 1
Q[s][a] += returns_sum[s][a]
# Compute averages for each action in each state
for s, _ in Q.items():
for a in range(Q[s].shape[0]):
if N[s][a] > 0:
Q[s][a] = Q[s][a] / N[s][a]
return Q
def mc_prediction_q_v2(env, num_episodes, generate_episode, gamma=1.0):
# Note: Initialize empty dictionaries of arrays of shape n
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n).astype(np.float32))
for i in range(1, num_episodes + 1):
episode = generate_episode(env)
# Get all the states, actions, rewards
states, actions, rewards = zip(*episode)
for t in range(0, len(states)):
s = states[t]; a = actions[t]
# First-visit (check passed states to see if current state has been gone through)
if s not in states[: t]:
                # Compute the cumulative discounted return from St onward
                # Gt = Rt+1 + gamma*Rt+2 + gamma**2*Rt+3 + ... + gamma**(k-1)*Rt+k
returns_sum[s][a] += sum([ (gamma**j) * rewards[idx] for j, idx in enumerate(range(t, len(rewards))) ])
N[s][a] += 1.0
Q[s][a] = returns_sum[s][a] / N[s][a]
return Q
_DEBUG_ = False
if _DEBUG_:
nepisodes = 1
Q = mc_prediction_q_v2(env, nepisodes, generate_episode_from_limit_stochastic)
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
nepisodes = 5000
Q = mc_prediction_q_v2(env, nepisodes, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k, (k[0] > 18) * (np.dot([0.8, 0.2], v)) + (k[0] <= 18) * (np.dot([0.2, 0.8], v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode(env, nA, policy):
if env is None:
        raise ValueError('Environment has not been initialized!')
# init the env
episode = []
state = env.reset()
is_terminal = False
while not is_terminal:
# sample for action (use the updated policy if we have such state in there)
# 0 or 1, based on p(a|s)
action = None
if state not in policy:
action = env.action_space.sample()
else:
# Pick action with eps-soft policy
action = np.random.choice(np.arange(nA), p=policy[state])
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
is_terminal = True
return episode
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# Policy: state -> (p_a1, p_a2) -> max(p_a1, p_a2)
policy = defaultdict(lambda: np.ones(nA))
Q = defaultdict(lambda: np.zeros(nA))
eps = 0.9 # At the start, we want to explore a little more
for i_episode in range(1, num_episodes + 1):
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
        eps = 0.9 / i_episode  # GLIE-style decay: eps_i proportional to 1/i
episode = generate_episode(env, nA, policy)
T = len(episode)
states, actions, rewards = zip(*episode)
for t in range(0, T):
s = states[t]; a = actions[t]
if s not in states[: t]:
Gt = sum([ (gamma**d) * rewards[idx] for d, idx in enumerate(range(t, T)) ])
Q[s][a] += alpha * (Gt - Q[s][a])
# Update the policy: Pick the most likely action of currently estimated state-action value function for the current policy
a_max = np.argmax(Q[s])
# Check if the currently picked action is the most likely and update the policy pair
policy[s] = np.ones(nA) * (eps / nA)
policy[s][a_max] = ((1 - eps) + (eps / nA))
    # Per the spec above, return the greedy action for each state; the eps-soft
    # probabilities in `policy` are only needed while generating episodes
    greedy_policy = dict((k, np.argmax(v)) for k, v in Q.items())
    return greedy_policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
num_episodes = 50000
# constant step-size parameter for the update Q <- Q + alpha*(Gt - Q)
alpha = 0.02
policy, Q = mc_control(env, num_episodes, alpha)
###Output
Episode 50000/50000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(21, 10, True)
End game! Reward: 1.0
You won :)
(14, 10, False)
End game! Reward: -1.0
You lost :(
(21, 6, True)
(21, 6, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((13, 8, False), 1, 0), ((18, 8, False), 1, -1)]
[((12, 10, False), 1, -1)]
[((15, 6, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) `{s0 : [v_a0, v_a1, ...], s1 : [v_a0, v_a1, ...], ...}` where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.The pseudocode for the _first-visit MC prediction_ is:There are three relevant tables (implemented as dictionaries): * $Q$ - $Q$-table, with a row for each state and a column for each action. The entry corresponding to state $s$ and action $a$ is denoted $Q(s,a)$. * $N$ - $N$-table that keeps track of the number of first visits we have made to each state-action pair. * $returns\_sum$ - table that keeps track of the sum of the rewards obtained after first visits to each state-action pair. In the algorithm, the number of episodes the agent collects is equal to $num\_episodes$. After each episode, $N$ and $returns\_sum$ are updated to store the information contained in the episode. Then, after all of the episodes have been collected and the values in $N$ and $returns\_sum$ have been finalized, we quickly obtain the final estimate for $Q$.
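Before writing the full loop, here is a tiny, hand-checkable pass over those three tables (a sketch with a two-step episode taken from the output above and $\gamma=1$; not part of the exercise): every state-action pair below is a first visit, so its return $G_t$ is added to `returns_sum`, its counter in `N` is incremented, and `Q` is their ratio.
###Code
# Toy illustration of the Q / N / returns_sum bookkeeping described above.
from collections import defaultdict
import numpy as np

nA = 2
returns_sum = defaultdict(lambda: np.zeros(nA))
N = defaultdict(lambda: np.zeros(nA))

episode = [((13, 8, False), 1, 0.0), ((18, 8, False), 1, -1.0)]   # (S, A, R) tuples
states, actions, rewards = zip(*episode)
for t, (s, a) in enumerate(zip(states, actions)):
    G_t = sum(rewards[t:])            # gamma = 1, so no discounting needed
    N[s][a] += 1.0
    returns_sum[s][a] += G_t
Q = {s: returns_sum[s] / np.maximum(N[s], 1.0) for s in N}
print(Q)   # both visited pairs end up with an estimate of -1.0 for action HIT
###Output
_____no_output_____
###Markdown
With that bookkeeping in mind, here is a first, deliberately verbose implementation.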
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n)) # default items are s : [0, 0]
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
#███╗ ██╗ ██████╗ ████████╗███████╗
#████╗ ██║██╔═══██╗╚══██╔══╝██╔════╝
#██╔██╗ ██║██║ ██║ ██║ █████╗
#██║╚██╗██║██║ ██║ ██║ ██╔══╝
#██║ ╚████║╚██████╔╝ ██║ ███████╗
#╚═╝ ╚═══╝ ╚═════╝ ╚═╝ ╚══════╝
# http://patorjk.com/software/taag/#p=display&f=ANSI%20Shadow&t=NOTE
#
# 1. This is a non-Pythonic implementation of non-discount
# return-at-game-end first-visit MC prediction for BJ.
# See the next code cell with the official solution.
## TODO: complete the function
# - generate episode
episode = generate_episode(env)
# sample output:
# [((13, 8, False), 1, 0), ((18, 8, False), 1, -1)]
# [((12, 10, False), 1, -1)]
# [((15, 6, False), 1, -1)]
        # Note: For BJ, first-visit and every-visit are equivalent because the same
        # state never repeats within an episode, per the rules of the game. (TODO: Verify)
# - for each first visit in the episode, update N and returns_sum
G_episode = episode[-1][2]
for e in episode: # Assumes only first visits! (See note above)
N[e[0]][e[1]] = N[e[0]][e[1]] + 1 # N[s][a] = N[s][a] + 1
returns_sum[e[0]][e[1]] = returns_sum[e[0]][e[1]] + G_episode
## TODO
# - fill out Q
for s in N.keys():
Q[s] = returns_sum[s]/N[s] # itemwise division [r_stick/n_stick, r_hit/n_hit]
return Q
###Output
_____no_output_____
###Markdown
Now, let's study the prescribed implementation:
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
#███╗ ██╗ ██████╗ ████████╗███████╗
#████╗ ██║██╔═══██╗╚══██╔══╝██╔════╝
#██╔██╗ ██║██║ ██║ ██║ █████╗
#██║╚██╗██║██║ ██║ ██║ ██╔══╝
#██║ ╚████║╚██████╔╝ ██║ ███████╗
#╚═╝ ╚═══╝ ╚═════╝ ╚═╝ ╚══════╝
# http://patorjk.com/software/taag/#p=display&f=ANSI%20Shadow&t=NOTE
#
# 1. Because for BJ first-visit and every-visit are equivalent,
# possibly due to the fact that an episode does not have
# more than one instance of the same (s, a, •) tuple (verify),
# this implementation is for every-visit MC prediction.
# generate an episode
episode = generate_episode(env) # !!: a list of (s, a, r) tuples
# obtain the states, actions, and rewards
states, actions, rewards = zip(*episode) # !!: elegant Pythonic episode unpacking
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)]) # !!: first one is not discounted (gamma=1)
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states): # !!: every-visit MC: i indexes the actions
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)]) # !!: for general return
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
print(env.action_space.n)
d = defaultdict(lambda: np.zeros(env.action_space.n)) # items are s: [0, 0]
for i in d.items():
print(i)
e = [((13, 8, False), 1, 0), ((18, 8, False), 1, -1)]
states, actions, rewards = zip(*e)
print(states, actions, rewards)
###Output
((13, 8, False), (18, 8, False)) (1, 1) (0, -1)
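###Markdown
One more sanity check (a sketch, not part of the exercise): the slice `discounts[:-(1+i)]` lines up the discount factors with `rewards[i:]`, so each term of the return $G_i = \sum_{k\ge 0}\gamma^k R_{i+k+1}$ is weighted correctly. The toy run below makes that visible with $\gamma=0.9$ and made-up rewards.
###Code
# Verify the rewards/discounts slicing trick used in the solution above.
import numpy as np

gamma = 0.9
rewards = (0.0, 0.0, -1.0)                                        # R_1, R_2, R_3
discounts = np.array([gamma**i for i in range(len(rewards)+1)])   # [1, g, g^2, g^3]
for i in range(len(rewards)):
    G_i = sum(rewards[i:] * discounts[:-(1+i)])                   # pairs R_{i+1} with gamma^0, R_{i+2} with gamma^1, ...
    print(i, G_i)                                                 # -0.81, -0.9, -1.0
###Output
_____no_output_____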
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Part 2.1: Epsilon-greedyA greedy policy does only exploitation, always choosing the action resulting in the highest return. Such a policy may fail to explore enough of its environment and may thus result in the agent getting stuck in a local maximum. That is, greedy policies may be *suboptimal*.An $\epsilon$-greedy policy does mostly exploitation, but occasionally it explores. That is, the policy mostly picks the current highest-return action (*greedy* action), but occasionally will pick a (currently) suboptimal action and expand its knowledge of the environment.You can think of the agent who follows an $\epsilon$-greedy policy as always having a (potentially unfair) coin at its disposal, with probability $\epsilon$ of landing heads. After observing a state, the agent flips the coin.- If the coin lands tails (so, with probability $1-\epsilon$), the agent selects the greedy action.- If the coin lands heads (so, with probability $\epsilon$), the agent selects an action uniformly at random from the set of available (non-greedy AND greedy) actions.In order to construct a policy $\pi$ that is $\epsilon$-greedy with respect to the current action-value function estimate $Q$, we will set$$\pi(a|s) \longleftarrow \begin{cases} \displaystyle a^* = argmax_a\:Q(a), & \textrm{with probability } 1-\epsilon\\ \displaystyle \textrm{a random action,} & \textrm{with probability } \epsilon \end{cases}$$for each $s\in\mathcal{S}$ and $a\in\mathcal{A}(s)$. Since in this expression of our policy, both alternatives contain the greedy action, for computational purposes, it is more convenient to state the probabilities with which each of the available actions will be picked:$$\pi(a|s) \longleftarrow \begin{cases} \displaystyle a^* = arg\:max_a\:Q(a), & \textrm{with probability } 1-\epsilon+{\epsilon / |\mathcal{A}(s)|}\\ \displaystyle \textrm{and all remaining } k-1 \textrm{ actions,} & \textrm{with equal probability of } {\epsilon / |\mathcal{A}(s)|} \end{cases}$$for each $s\in\mathcal{S}$ and $a\in\mathcal{A}(s)$. This expression will allow us to set the action probabilities when generating the episodes according to our policy. Part 2.2: Greedy in the Limit with Infinite Exploration (GLIE)In order to guarantee that MC control converges to the optimal policy $\pi$, we need to ensure that two conditions are met. We refer to these conditions as **Greedy in the Limit with Infinite Exploration (GLIE)**. In particular, if: * every state-action pair $s, a$ (for all $s\in\mathcal{S}$ and $a\in\mathcal{A}(s)$) is visited infinitely many times, and * the policy converges to a policy that is greedy with respect to the action-value function estimate $Q$, then MC control is guaranteed to converge to the optimal policy (in the limit as the algorithm is run for *infinitely many episodes*). These conditions ensure that: * the agent continues to explore for all time steps, and * the agent gradually **exploits** more (and **explores** less). One way to satisfy these conditions is to modify the value of $\epsilon$ when specifying an $\epsilon$-greedy policy. In particular, let $\epsilon_i$ correspond to the $i$-th time step. 
Then, both of these conditions are met if: * $\epsilon_i > 0$ for all time steps $i$, and * $\epsilon_i$ decays to zero in the limit as the time step $i$ approaches infinity (that is, $\lim_{i\to\infty} \epsilon_i = 0$).For example, to ensure convergence to the optimal policy, we could set $\epsilon_i = \frac{1}{i}$. (You are encouraged to verify that $\epsilon_i > 0$ for all $i$, and $\lim_{i\to\infty} \epsilon_i = 0$.) Part 2.3: Incremental meanIn our current algorithm for Monte Carlo control, we collect a large number of episodes to build the $Q$-table (as an estimate for the action-value function corresponding to the agent's current policy). Then, after the values in the $Q$-table have converged, we use the table to come up with an improved policy.Maybe it would be more efficient to update the $Q$-table *after every episode*. Then, the updated $Q$-table could be used to improve the policy. That new policy could then be used to generate the next episode, and so on.The pseudocode of the first-visit GLIE MC control is:There are two relevant tables: * $Q$ - $Q$-table, with a row for each state and a column for each action. The entry corresponding to state $s$ and action $a$ is denoted $Q(s,a)$. * $N$ - table that keeps track of the number of first visits we have made to each state-action pair. The number of episodes the agent collects is equal to $num\_episodes$.The algorithm proceeds by looping over the following steps: 1. The policy $\pi$ is improved to be $\epsilon$-greedy with respect to $Q$, and the agent uses $\pi$ to collect an episode. 2. $N$ is updated to count the total number of first visits to each state action pair. 3. The estimates in $Q$ are updated to take into account the most recent information. The update formula is as follows:$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + {1 \over N(S_t, A_t)} (G_t - Q(S_t, A_t))$$In this way, the agent is able to improve the policy after every episode! Part 2.4: Constant alphaHowever, the term ${1 \over N(S_t, A_t)}$ diminishes with the number of visits, causing vanishing update weights for later visits. To alleviate this, we can use a constant weight $\alpha$ as in the following update formula:$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha (G_t - Q(S_t, A_t))$$or, alternatively, to show the contribution of the current value estimate and the return,$$Q(S_t,A_t) \leftarrow (1-\alpha)Q(S_t,A_t) + \alpha G_t$$The pseudocode of the first-visit constant-$\alpha$ GLIE MC control is:(_Feel free to define additional functions to help you to organize your code._) Part 2.5: ImplementationYour algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.
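Before the implementation, here is a small numeric illustration of the two formulas above (a sketch with made-up numbers, not the notebook's code): the $\epsilon$-greedy action probabilities for one state, and a single constant-$\alpha$ update.
###Code
# Epsilon-greedy probabilities and one constant-alpha update, on made-up values.
import numpy as np

nA, epsilon = 2, 0.1
Q_s = np.array([0.4, -0.2])              # hypothetical action values for one state
probs = np.ones(nA) * epsilon / nA       # every action gets eps/|A(s)|
probs[np.argmax(Q_s)] += 1 - epsilon     # the greedy action gets the extra 1-eps
print(probs)                             # [0.95 0.05]

alpha, G = 0.02, 1.0                     # suppose a return of 1.0 follows action 0
Q_s[0] += alpha * (G - Q_s[0])           # Q(S,A) <- Q(S,A) + alpha*(G - Q(S,A))
print(Q_s[0])                            # ~0.412
###Output
_____no_output_____
###Markdown
The helper functions below implement exactly these two pieces, plus the episode loop.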
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
"""Generate an episode using a GLIA policy wrt the current
action-value table Q, with given epsilon and an action
space with size nA"""
episode = []
state = env.reset()
while True:
#███╗ ██╗ ██████╗ ████████╗███████╗
#████╗ ██║██╔═══██╗╚══██╔══╝██╔════╝
#██╔██╗ ██║██║ ██║ ██║ █████╗
#██║╚██╗██║██║ ██║ ██║ ██╔══╝
#██║ ╚████║╚██████╔╝ ██║ ███████╗
#╚═╝ ╚═══╝ ╚═════╝ ╚═╝ ╚══════╝
# http://patorjk.com/software/taag/#p=display&f=ANSI%20Shadow&t=NOTE
#
# 1. Because Q is a dictionary, and this might be the first visit of
        #    'state', it is necessary to add 'if state in Q' and return a
# uniformly-sampled action if 'state' has not been visited before.
action = np.random.choice(np.arange(nA), p=epsilon_greedy_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def epsilon_greedy_probs(Q_s, epsilon, nA):
"""Returns a list of the probabilities of nA ordered actions
Q_s for the implementation of an epsilon greedy policy,
at a particular state s. The action with the highest value
in Q_s is chosen as the greedy action a* and its probability
is set at 1-e+e/nA. The probabilities for all other actions
are set at e/nA."""
policy_probs = np.ones(nA) * epsilon / nA
arg_a_star = np.argmax(Q_s)
policy_probs[arg_a_star] = 1 - epsilon + epsilon / nA
return policy_probs
def update_Q_from_policy(episode, Q, gamma, alpha):
"""Update the Q table with constant alpha picked to strongly
favor the current Q[state] values instead of the long-term
cumulative return. """
states, actions, rewards = zip(*episode) # !!: elegant Pythonic episode unpacking
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)]) # !!: first one is not discounted (gamma=1)
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states): # !!: every-visit MC: i indexes the actions
Q_sa_old = Q[state][actions[i]]
G = sum(rewards[i:]*discounts[:-(1+i)])
#███╗ ██╗ ██████╗ ████████╗███████╗
#████╗ ██║██╔═══██╗╚══██╔══╝██╔════╝
#██╔██╗ ██║██║ ██║ ██║ █████╗
#██║╚██╗██║██║ ██║ ██║ ██╔══╝
#██║ ╚████║╚██████╔╝ ██║ ███████╗
#╚═╝ ╚═══╝ ╚═════╝ ╚═╝ ╚══════╝
# http://patorjk.com/software/taag/#p=display&f=ANSI%20Shadow&t=NOTE
#
# 1. The alpha removes the normalization step from MC_prediction. The
# counts in N are not even tracked
Q[state][actions[i]] = Q_sa_old + alpha * (G - Q_sa_old)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
    # loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
#███╗ ██╗ ██████╗ ████████╗███████╗
#████╗ ██║██╔═══██╗╚══██╔══╝██╔════╝
#██╔██╗ ██║██║ ██║ ██║ █████╗
#██║╚██╗██║██║ ██║ ██║ ██╔══╝
#██║ ╚████║╚██████╔╝ ██║ ███████╗
#╚═╝ ╚═══╝ ╚═════╝ ╚═╝ ╚══════╝
# http://patorjk.com/software/taag/#p=display&f=ANSI%20Shadow&t=NOTE
#
# 1. Value of epsilon = e^(-(x+1.2*10^4)/(10^5))+0.1, x = i_episode.
# Estimated number of episodes necessary for convergence to the
# optimal policy is 500,000. Epsilon starts at just under 1.0 and
        #    converges asymptotically to 0.1, which keeps epsilon_i > 0 for all i (exploration never stops).
#
# 1. Set epsilon for this episode (starts under 1.0 and decays asymp to 0.1)
epsilon = np.e**(-(i_episode + 1.2 * 10**4) / (10**5)) + 0.1
# 2. Set policy to be epsilon-greedy wrt Q (starts out randomly initialized)
# Only 2 actions, so: greedy action with probability 1-e+e/2, other with probability e/2
# 3. Generate an episode with epsilon-greedy policy
# The environment does most of this
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# 4. Update Q table with constant alpha
# This includes the discount coefficient gamma
Q = update_Q_from_policy(episode, Q, gamma, alpha)
# 5. Extract the policy from Q to return separately
policy = { k: np.argmax(v) for k, v in Q.items() }
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(18, 4, True)
End game! Reward: -1.0
You lost :(
(17, 6, False)
End game! Reward: 1.0
You won :)
(14, 3, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((20, 6, False), 0, 1.0)]
[((18, 9, False), 1, -1.0)]
[((21, 10, True), 1, 0.0), ((13, 10, False), 1, 0.0), ((16, 10, False), 1, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
visit = defaultdict(lambda: np.zeros(env.action_space.n))
for i, (state, action, reward) in enumerate(episode):
if visit[state][action] == 0:
visit[state][action] = 1
N[state][action] = N[state][action] + 1
episode_reward = 0
for j in range(len(episode) - i):
episode_reward = episode_reward + (gamma**j) * episode[i+j][2]
returns_sum[state][action] = returns_sum[state][action] + episode_reward
Q[state][action] = returns_sum[state][action] / N[state][action]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def epsilon_greedy(state, Q, epsilon, nA):
a_max = np.argmax(Q[state])
probability = np.zeros(nA)
probability[a_max] = 1 - epsilon
probability = probability + epsilon / nA
return probability
def generate_episode(env, Q, epsilon):
episode = []
state = env.reset()
while True:
        action_probabilities = epsilon_greedy(state, Q, epsilon, env.action_space.n)
action = np.random.choice(env.action_space.n, 1, p=action_probabilities)[0]
# print(action)
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
epsilon_init = 1.0
for i_episode in range(1, num_episodes + 1):
# monitor progress
epsilon = epsilon_init / i_episode
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env, Q, epsilon)
visit = defaultdict(lambda: np.zeros(env.action_space.n))
for i, (state, action, reward) in enumerate(episode):
if visit[state][action] == 0:
visit[state][action] = 1
episode_reward = 0
for j in range(len(episode) - i):
episode_reward = episode_reward + (gamma ** j) * episode[i + j][2]
Q[state][action] = Q[state][action] + alpha * (episode_reward - Q[state][action])
policy = defaultdict(lambda: 0)
for k, v in Q.items():
policy[k] = np.argmax(v)
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
print(action)
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(6, 9, False)
0
End game! Reward: 1.0
You won :)
(12, 9, False)
1
End game! Reward: -1.0
You lost :(
(12, 4, False)
1
(17, 4, False)
0
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((13, 5, False), 1, -1.0)]
[((20, 10, False), 0, 1.0)]
[((17, 10, False), 1, 0.0), ((20, 10, False), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episode = generate_episode(env)
episode_len = len(episode)
for i, (state, action, reward) in enumerate(episode):
            rewards = np.array([turn_data[2] for turn_data in episode[i:]]) * np.array([gamma**k for k in range(episode_len - i)])
returns_sum[state][action] += np.sum(rewards)
N[state][action] += 1
Q[state][action] = returns_sum[state][action] / N[state][action]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def get_eps_action(greedy_policy_dict, state, epsilon):
probs = [1 - epsilon + epsilon / 2, epsilon / 2]
actions = [greedy_policy_dict[state], 1 - greedy_policy_dict[state]]
action = np.random.choice(actions, p=probs)
return action
def generate_episode(bj_env, greedy_policy_dict, epsilon):
episode = []
state = bj_env.reset()
while True:
action = get_eps_action(greedy_policy_dict, state, epsilon)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def evaluate_policy(policy, Q, N, episode, alpha, gamma):
    # Note: this variant uses the incremental-mean step size 1/N(s,a) rather than
    # the constant alpha that is passed in.
    episode_len = len(episode)
    for i, (state, action, reward) in enumerate(episode):
        rewards = np.array([turn_data[2] for turn_data in episode[i:]]) * np.array([gamma**k for k in range(episode_len - i)])
        N[state][action] += 1
        Q[state][action] = Q[state][action] + 1 / N[state][action] * (np.sum(rewards) - Q[state][action])
    return Q
def improve_policy(Q):
    policy = {}
    for state in Q:
        policy[state] = 0 if Q[state][0] > Q[state][1] else 1
    return policy
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.33):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
N = defaultdict(lambda: np.zeros(env.action_space.n))
policy = defaultdict(lambda: 0)
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episode = generate_episode(env, policy, epsilon)
Q = evaluate_policy(policy, Q, N, episode, alpha, gamma)
        policy = improve_policy(Q)
epsilon = max(epsilon*eps_decay, eps_min)
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
_____no_output_____
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return Q
###Output
_____no_output_____
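###Markdown
One possible way to fill in the `TODO` above (a compressed first-visit sketch, condensing the solved copies earlier in this document; the helper name `first_visit_update` is only for illustration and this is not the only valid answer): compute $G_t$ for each first visit in the episode and keep `returns_sum`, `N`, and `Q` in sync.
###Code
# Sketch of the per-episode update for first-visit MC prediction (illustrative helper).
def first_visit_update(episode, returns_sum, N, Q, gamma):
    states, actions, rewards = zip(*episode)
    pairs = list(zip(states, actions))
    for t, (s, a) in enumerate(pairs):
        if (s, a) in pairs[:t]:
            continue                                   # not the first visit
        G_t = sum((gamma**k) * r for k, r in enumerate(rewards[t:]))
        N[s][a] += 1.0
        returns_sum[s][a] += G_t
        Q[s][a] = returns_sum[s][a] / N[s][a]
    return Q
###Output
_____no_output_____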
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
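###Markdown
One possible completion of the `TODO` above, sketched with the illustrative names `get_probs_sketch`, `generate_episode_from_Q_sketch`, and `mc_control_sketch`: behave epsilon-greedily with respect to the current `Q`, then move each visited `Q[s][a]` a fraction `alpha` toward the sampled return. The epsilon schedule is one arbitrary choice among many.
###Code
def get_probs_sketch(Q_s, epsilon, nA):
    # epsilon-greedy action probabilities for a single state
    probs = np.ones(nA) * epsilon / nA
    probs[np.argmax(Q_s)] += 1 - epsilon
    return probs

def generate_episode_from_Q_sketch(env, Q, epsilon, nA):
    # sample one episode while following the epsilon-greedy policy derived from Q
    episode = []
    state = env.reset()
    while True:
        action = np.random.choice(np.arange(nA), p=get_probs_sketch(Q[state], epsilon, nA)) \
            if state in Q else env.action_space.sample()
        next_state, reward, done, info = env.step(action)
        episode.append((state, action, reward))
        state = next_state
        if done:
            break
    return episode

def mc_control_sketch(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=0.99999, eps_min=0.05):
    nA = env.action_space.n
    Q = defaultdict(lambda: np.zeros(nA))
    epsilon = eps_start
    for i_episode in range(1, num_episodes + 1):
        epsilon = max(epsilon * eps_decay, eps_min)
        episode = generate_episode_from_Q_sketch(env, Q, epsilon, nA)
        states, actions, rewards = zip(*episode)
        discounts = np.array([gamma ** i for i in range(len(rewards) + 1)])
        for i, state in enumerate(states):
            # constant-alpha update toward the sampled return
            G = sum(rewards[i:] * discounts[:-(1 + i)])
            Q[state][actions[i]] += alpha * (G - Q[state][actions[i]])
    policy = dict((s, np.argmax(a)) for s, a in Q.items())
    return policy, Q
###Output
_____no_output_____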
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(20, 5, False)
End game! Reward: -1.0
You lost :(
(20, 3, False)
End game! Reward: 1.0
You won :)
(13, 8, False)
(18, 8, False)
End game! Reward: -1.0
You lost :(
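###Markdown
To read the printout above: a state such as `(20, 5, False)` means the player's current sum is 20, the dealer's face-up card is 5, and the player holds no usable ace.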
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
#randomly select from 0 and 1
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((14, 3, False), 1, -1.0)]
[((4, 4, False), 1, 0.0), ((13, 4, False), 1, -1.0)]
[((13, 9, False), 1, -1.0)]
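###Markdown
For an episode like `[((4, 4, False), 1, 0.0), ((13, 4, False), 1, -1.0)]`, the return from the first time step is $G_0 = R_1 + \gamma R_2 = 0.0 + \gamma \cdot (-1.0)$, which equals $-1.0$ when $\gamma = 1$, and the return from the second time step is $G_1 = R_2 = -1.0$. These sampled returns are exactly what MC prediction averages in the next part.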
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
#generate the episode under the policy
episode = generate_episode(env)
#unzip the reward, state, and actions
states, actions, reward = zip(*episode)
#Prepare for discounts in each step
discounts = np.array([gamma ** i for i in range(len(reward) + 1)])
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(reward[i:] * discounts[:-(1+i)])
N[state][actions[i]] += 1
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
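###Markdown
The plotting cell above converts $Q$ to a state-value function by averaging over the policy's action probabilities, $V_\pi(s) = \sum_a \pi(a|s)\,Q_\pi(s,a)$: the dot product uses $[0.8, 0.2]$ (mostly STICK) when the player's sum exceeds 18 and $[0.2, 0.8]$ (mostly HIT) otherwise, matching the stochastic policy defined earlier.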
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
episode = []
state = env.reset()
while True:
        # epsilon-greedy: follow get_probs for states already in Q, otherwise act randomly
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q, epsilon, nA):
"""Obtain the epsilon-greedy policy"""
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
states, actions, reward = zip(*episode)
discounts = np.array([gamma ** i for i in range(len(reward) + 1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha * (sum(reward[i:] * discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, epsilon = 0.99, epsilon_decay = 0.9999, epsilon_min = 0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
epsilon = max(epsilon * epsilon_decay, epsilon_min)
episode = generate_episode_from_Q(env, Q, epsilon, nA)
Q = update_Q(env, episode, Q, alpha, gamma)
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
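###Markdown
Two details of the solution above, spelled out. `get_probs` assigns probability $\epsilon/n_A$ to every action and $1 - \epsilon + \epsilon/n_A$ to the greedy one; with $\epsilon = 0.1$ and $n_A = 2$ that is $[0.05, 0.95]$ when action 1 is greedy. `update_Q` performs the constant-$\alpha$ update $Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\,\big(G_t - Q(S_t, A_t)\big)$, pulling each estimate a fraction $\alpha$ of the way toward the sampled return $G_t$.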
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.05)
###Output
Episode 500000/500000.
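###Markdown
The returned `policy` and `Q` are plain dictionaries keyed by state tuples, so single entries can be inspected directly. The state `(18, 7, False)` below is just an example, and `dict.get` guards against it being absent.
###Code
# inspect the greedy action and the action values for one example state
example_state = (18, 7, False)   # player sum 18, dealer shows 7, no usable ace
print('greedy action:', policy.get(example_state))
print('action values:', Q.get(example_state))
###Output
_____no_output_____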
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
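###Markdown
A rough, optional way to gauge the learned policy is to roll it out for many games and average the reward. The sketch below assumes a fallback to a random action for any state missing from `policy`.
###Code
# estimate the average reward of the greedy policy over many games
n_games = 10000
total_reward = 0.0
for _ in range(n_games):
    state = env.reset()
    while True:
        # fall back to a random action for states never seen during training
        action = policy.get(state, env.action_space.sample())
        state, reward, done, info = env.step(action)
        if done:
            total_reward += reward
            break
print('average reward per game: {:.3f}'.format(total_reward / n_games))
###Output
_____no_output_____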
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(5):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(18, 10, False)
End game! Reward: 1.0
You won :)
(20, 10, False)
End game! Reward: 1.0
You won :)
(21, 6, True)
(18, 6, False)
End game! Reward: -1.0
You lost :(
(4, 9, False)
(13, 9, False)
(18, 9, False)
(19, 9, False)
End game! Reward: -1.0
You lost :(
(19, 6, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((15, 4, False), 1, -1.0)]
[((11, 10, False), 0, -1.0)]
[((13, 6, False), 1, 0.0), ((19, 6, False), 0, 0.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
epsilon = max(epsilon*eps_decay, eps_min)
episode = generate_episode_from_Q(env, Q, epsilon, nA)
Q = update_Q(env, episode, Q, alpha, gamma)
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
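###Markdown
With `eps_start=1.0`, `eps_decay=.99999`, and `eps_min=0.05`, epsilon follows $\epsilon_k = \max(0.99999^k,\ 0.05)$, so it reaches the floor after roughly $\ln(0.05)/\ln(0.99999) \approx 3.0 \times 10^5$ episodes; the remaining episodes of a 500,000-episode run explore at the fixed 5% rate.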
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(21, 3, True)
(17, 3, False)
End game! Reward: -1.0
You lost :(
(21, 1, True)
(14, 1, False)
(19, 1, False)
End game! Reward: 1.0
You won :)
(18, 4, False)
(19, 4, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(10):
print(generate_episode_from_limit_stochastic(env))
###Output
[((13, 10, False), 1, 0.0), ((18, 10, False), 1, -1.0)]
[((14, 10, False), 1, 0.0), ((16, 10, False), 1, -1.0)]
[((15, 9, False), 1, -1.0)]
[((20, 10, False), 0, 1.0)]
[((19, 1, False), 1, -1.0)]
[((13, 2, False), 1, 0.0), ((14, 2, False), 0, 1.0)]
[((16, 5, False), 1, -1.0)]
[((16, 1, False), 1, -1.0)]
[((8, 7, False), 1, 0.0), ((18, 7, False), 0, 1.0)]
[((6, 9, False), 1, 0.0), ((12, 9, False), 1, 0.0), ((13, 9, False), 1, 0.0), ((19, 9, False), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## My work
episode = generate_episode(env)
_, _, rewards = zip(*episode)
is_first_visit = defaultdict(lambda: np.ones(env.action_space.n))
for i, (state, action, _) in enumerate(episode):
if is_first_visit[state][action]: # First-visit monte-carlo
is_first_visit[state][action] = 0 # Mark as visited
N[state][action] += 1
ret = sum([reward * (gamma**k) for k, reward in enumerate(rewards[i:])])
returns_sum[state][action] += ret
# Get Q
for state in N:
for action in range(env.action_space.n):
Q[state][action] = returns_sum[state][action] / N[state][action]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
class EpsilonGreedyPolicy():
def __init__(self, Q, action_space, epsilon):
self.Q = Q # Action-value function
self.actions = action_space
self.epsilon = epsilon
def get_action(self, state):
greedy_choice = np.argmax(self.Q[state])
random_choice = np.random.choice(self.actions)
epsilon_greedy_choice = np.random.choice(
[greedy_choice, random_choice],
p = [1-self.epsilon, self.epsilon]
)
return epsilon_greedy_choice
def generate_episode(env, policy):
episode = []
state = env.reset()
while True:
action = policy.get_action(state)
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# Update epsilon
# epsilon = 1 / i_episode
epsilon = 1 - 0.9 * (i_episode / num_episodes) # 1 -> 0.1
# Policy Pi <- EpsilonGreedy(Q)
policy_func = EpsilonGreedyPolicy(Q, range(nA), epsilon)
# Sample episode using Policy
episode = generate_episode(env, policy_func)
# Update Q with episode weighted on alpha
_, _, rewards = zip(*episode)
for i, (state, action, _) in enumerate(episode):
ret = sum([(gamma ** k) * reward for k, reward in enumerate(rewards[i:])])
Q[state][action] = (1 - alpha) * Q[state][action] + alpha * ret
# Freeze the "optimal" policy to a deterministic policy
policy = defaultdict(lambda: np.random.choice(range(nA)))
for state in Q:
policy[state] = np.argmax(Q[state])
return policy, Q
###Output
_____no_output_____
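###Markdown
Note that `EpsilonGreedyPolicy.get_action` first draws a uniformly random candidate action and then picks between it and the greedy action with probabilities $[1-\epsilon, \epsilon]$; since the random candidate can itself be the greedy action, the greedy action is chosen with total probability $(1-\epsilon) + \epsilon/n_A$, which matches the usual epsilon-greedy definition.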
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, alpha=0.02, gamma=1.0)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
/Users/ehu/Projects/gym/gym/__init__.py:22: UserWarning: DEPRECATION WARNING: to improve load times, gym no longer automatically loads gym.spaces. Please run "import gym.spaces" to load gym.spaces on your own. This warning will turn into an error in a future version of gym.
warnings.warn('DEPRECATION WARNING: to improve load times, gym no longer automatically loads gym.spaces. Please run "import gym.spaces" to load gym.spaces on your own. This warning will turn into an error in a future version of gym.')
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(17, 8, False)
End game! Reward: -1.0
You lost :(
(13, 5, False)
(18, 5, False)
(21, 5, False)
End game! Reward: 0.0
You lost :(
(11, 10, False)
(16, 10, False)
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((13, 4, True), 1, 0), ((13, 4, False), 1, -1)]
[((16, 4, False), 1, -1)]
[((11, 10, False), 1, 0), ((16, 10, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma ** i for i in range(len(rewards) + 1)])
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(20, 1, False)
End game! Reward: -1
You lost :(
(13, 8, False)
End game! Reward: 1.0
You won :)
(9, 10, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((16, 4, False), 1, -1)]
[((16, 3, False), 0, -1.0)]
[((5, 9, False), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n)) #stores the reward sums for each state-action encountered
    N = defaultdict(lambda: np.zeros(env.action_space.n)) #tracks the number of times a specific state-action has been encountered.
Q = defaultdict(lambda: np.zeros(env.action_space.n)) #return value. For each state-action, returns average reward.
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# generate an episode
episode_list = generate_episode(env) #list of (state, action, reward)
states, actions, rewards = zip(*episode_list)
#print("states: {}".format(states))
#print("rewards: {}".format(rewards))
        #iterate through the states encountered
for i, state in enumerate(states):
this_action = actions[i]
this_reward = rewards[i]
N[state][this_action] += 1.0 #update number of times this state-action has been encountered
#get rewards from i
this_rewards_sum = 0
for ii in range(i, len(states)):
this_rewards_sum = this_rewards_sum + (rewards[ii] * gamma**(ii-i))
returns_sum[state][this_action] += this_rewards_sum
#print("returns_sum[state][this_action]: {}".format(returns_sum[state][this_action]))
#print("N[state][this_action]: {}".format(N[state][this_action]))
Q[state][this_action] = returns_sum[state][this_action] / N[state][this_action]
#print("Q[state][this_action]: {}".format(Q[state][this_action]))
#print("")
return Q
Q = mc_prediction_q(env, 10, generate_episode_from_limit_stochastic, gamma=.9)
for key in Q.keys():
print("{}: {}".format(key, Q[key]))
###Output
(10, 1, False): [ 0. -0.81]
(13, 1, False): [ 0. -0.9]
(14, 1, False): [-1. 0.]
(15, 10, False): [ 0. -0.9]
(17, 10, False): [ 0. -1.]
(16, 9, True): [0. 0.9]
(21, 9, True): [1. 0.]
(9, 10, False): [ 0. -0.9]
(19, 10, False): [ 0. -1.]
(18, 6, False): [ 0. -1.]
(18, 1, False): [ 0. -0.9]
(19, 1, False): [ 0. -1.]
(17, 9, False): [ 0. -0.9]
(18, 9, False): [-1. 0.]
(18, 10, False): [-1. 0.]
(5, 8, False): [ 0. -0.81]
(10, 8, False): [ 0. -0.9]
(18, 8, False): [ 0. -1.]
(20, 2, False): [1. 0.]
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
import random
def generate_episode_for_mc_control(bj_env, Q, epsilon):
episode = []
state = bj_env.reset()
while True:
#get estimated action values from Q table
action_values = Q[state]
#print("action_values at state {}: {}".format(state, action_values))
#calculate an action
rand = random.uniform(0, 1)
if rand < epsilon:
action = np.random.choice(np.arange(2)) #action = np.random.choice(np.arange(2), p=probs)
else:
action = np.argmax(action_values)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def mc_control(env, num_episodes, alpha=.2, gamma=0.95):
num_actions = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(num_actions))
# loop over episodes
for i_episode in range(1, num_episodes+1):
#decrease epsilon as episodes continue
epsilon = max(0.2, 1.0*(1-(i_episode/num_episodes)))
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}. epsilon: {}".format(i_episode, num_episodes, epsilon), end="")
sys.stdout.flush()
## TODO: complete the function
# generate an episode
episode_list = generate_episode_for_mc_control(env, Q, epsilon) #list of (state, action, reward)
states, actions, rewards = zip(*episode_list)
#print("\nstates: {}".format(states))
#print("actions: {}".format(actions))
#print("rewards: {}".format(rewards))
        #iterate through the states encountered
for i, state in enumerate(states):
this_action = actions[i]
this_reward = rewards[i]
#get rewards from i
this_rewards_sum_g = 0
for ii in range(i, len(states)):
this_rewards_sum_g = this_rewards_sum_g + (rewards[ii] * gamma**(ii-i))
#get existing value from Q table for this action (G)
existing_q = Q[state][this_action]
#update Q table using constant-alpha equation
Q[state][this_action] = (1-alpha)*existing_q + alpha*this_rewards_sum_g
#print("Q[state][this_action]: {}".format(Q[state][this_action]))
#create policy table
policy = {}
for state in Q.keys():
policy[state] = np.argmax(Q[state])
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 2000000, alpha=.01)
# keys = sorted(policy.keys())
# for state in keys:
# print("state:{}, action: {}".format(state, policy[state]) )
###Output
Episode 2000000/2000000. epsilon: 0.20059999999999996
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(14, 10, False)
End game! Reward: -1.0
You lost :(
(16, 2, False)
End game! Reward: -1.0
You lost :(
(19, 2, True)
(17, 2, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
env.reset()
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((20, 6, False), 0, 1.0)]
[((13, 1, False), 0, -1.0)]
[((9, 9, False), 1, 0.0), ((17, 9, False), 1, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discount = np.array([gamma**i for i in range(len(states)+1)])
for i ,state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discount[:-(i+1)])
N[state][actions[i]] += 1
Q[state][actions[i]] = returns_sum[state][actions[i]]/N[state][actions[i]]
return Q
# print(defaultdict(lambda: np.zeros(env.action_space.n)))
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
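As a tiny numeric illustration of the constant-$\alpha$ update (the numbers below are made up), each visit nudges the current estimate toward the sampled return.
###Code
# Constant-alpha update for a single state-action pair, with made-up numbers.
alpha = 0.02
Q_sa = 0.0 # current estimate for some (s, a)
G = 1.0    # sampled return observed after visiting (s, a)
Q_sa = Q_sa + alpha * (G - Q_sa)
print(Q_sa) # 0.02 -- a small step toward the sampled return
###Output
_____no_output_____
###Markdown
`update_Q` below applies this rule to every state-action pair visited in an episode.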
###Code
def generate_episode(env, Q, epsilon):
episode = []
state = env.reset()
while True:
# action selection based on updated state-action pair values
# 1. if state is in Q
# 2. State is new (i.e. not in Q)
action = np.random.choice(np.arange(env.action_space.n),
p=get_probability(Q[state], epsilon, env.action_space.n))\
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append([state, action, reward])
state = next_state
if done:
break
return episode
def get_probability(Q_s, epsilon, nA):
p = np.ones(nA) * epsilon / nA
best_action = np.argmax(Q_s)
p[best_action] += 1 - epsilon
return p
def update_Q(env, episode, Q, alpha, gamma):
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, s in enumerate(states):
old_q = Q[s][actions[i]]
Q[s][actions[i]] = old_q + alpha * (sum(rewards[i:] * discounts[:-(i+1)]) - old_q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, epsilon_max = 1, epsilon_decay = 0.99999, epsilon_min = 0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = epsilon_max
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
epsilon = max(epsilon*epsilon_decay, epsilon_min)
episode = generate_episode(env, Q, epsilon)
Q = update_Q(env, episode, Q, alpha, gamma)
policy = {s : np.argmax(a) for s, a in Q.items()}
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
# state = env.reset()
# for i in range(10):
# print("Initial state:", state)
# a = env.action_space.sample()
# print(env.player, env.dealer)
# print("Action:", a)
# s,r,d,i = env.step(a)
# print(env.player, env.dealer)
# print("State:", s)
# print("Reward:", r)
# print("Done:", d)
# print("Info:", i)
# if(d):
# state = env.reset()
# print("\n")
env.reset()
p, p_prime = 0, 0
for i in range(100000):
x = np.random.choice(np.arange(2), 1, p=[0.3,0.7])
if(x == 1):#p_prime
p_prime += 1
else:
p += 1
print(p_prime/(i + 1))
###Output
0.70019
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
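A small side note (hand-written values, for illustration only): `zip(*episode)` splits an episode of this form into parallel tuples of states, actions, and rewards, which is how the implementations below consume it.
###Code
# Unpacking a hand-written episode into parallel tuples.
example_episode = [((9, 9, False), 1, 0.0), ((17, 9, False), 1, -1.0)]
states, actions, rewards = zip(*example_episode)
print(states)  # ((9, 9, False), (17, 9, False))
print(actions) # (1, 1)
print(rewards) # (0.0, -1.0)
###Output
_____no_output_____
###Markdown
With the episode split this way, a loop over `enumerate(states)` can pair each state with its action and with the rewards that follow it.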
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((21, 10, True), 0, 1.0)]
[((13, 10, False), 1, -1.0)]
[((19, 5, False), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env) #generate an episode
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
# If the agent takes action 0 (STICK), the game terminates because that is its final decision before the dealer starts to play
for i_episode in range(10):
state = env.reset() #resets the episode and returns the initial state
while True:
print(state)
action = env.action_space.sample() #gets a random sample from the action space
state, reward, done, info = env.step(action) #executes the action and returns the reward and the next state
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(15, 10, True)
(14, 10, False)
End game! Reward: -1.0
You lost :(
(13, 4, False)
End game! Reward: 1.0
You won :)
(12, 7, False)
End game! Reward: -1.0
You lost :(
(19, 3, False)
End game! Reward: -1.0
You lost :(
(14, 7, False)
End game! Reward: 1.0
You won :)
(5, 10, False)
End game! Reward: -1.0
You lost :(
(13, 1, False)
End game! Reward: 1.0
You won :)
(20, 10, False)
End game! Reward: -1
You lost :(
(15, 10, False)
(19, 10, False)
End game! Reward: -1.0
You lost :(
(15, 10, True)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(10):
print(generate_episode_from_limit_stochastic(env))
###Output
[((20, 2, False), 0, -1.0)]
[((15, 2, False), 1, -1)]
[((17, 6, False), 1, 0), ((21, 6, False), 0, 0.0)]
[((21, 5, True), 0, 1.0)]
[((19, 9, False), 0, 0.0)]
[((8, 1, False), 0, -1.0)]
[((9, 10, False), 1, 0), ((14, 10, False), 1, -1)]
[((19, 8, False), 0, 1.0)]
[((4, 10, False), 1, 0), ((14, 10, False), 0, -1.0)]
[((11, 6, False), 1, 0), ((12, 6, False), 1, 0), ((19, 6, False), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
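The solution below uses first-visit bookkeeping; the toy episode in the next cell (made-up state names and rewards) shows the idea: only the first occurrence of each state contributes a return.
###Code
# First-visit bookkeeping on a toy episode (state names and rewards are made up).
toy_episode = [('s1', 1, 0.0), ('s2', 1, 0.0), ('s1', 0, 1.0)]
visited_states = []
for t, (s, a, r) in enumerate(toy_episode):
    if s in visited_states:
        continue # skip later visits to the same state
    visited_states.append(s)
    G = sum(step[2] for step in toy_episode[t:]) # undiscounted return from time t onward
    print(s, a, G) # s1 1 1.0, then s2 1 1.0
###Output
_____no_output_____
###Markdown
The implementation that follows tracks visited states with the same kind of list.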
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
visited_states = [] #make a list of visited states for a First-Visit MC implementation
for i_visit,visit in enumerate(episode):
state, action, reward = visit
if state not in visited_states:
visited_states.append(state)
N[state][action] = N[state][action] + 1
# get the reward after first visiting the current state
remaining_episode = episode[i_visit:]
sum_of_reward = sum([visit_reward[2] for visit_reward in remaining_episode])
returns_sum[state][action] = returns_sum[state][action] + sum_of_reward
# Get Q-value
Q[state][action] = returns_sum[state][action]/N[state][action]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
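For reference, the next cell samples the linearly decaying epsilon schedule used in this solution (with its 0.05 floor) at a few episode indices, assuming `num_episodes = 5000`.
###Code
# Linearly decaying epsilon with a 0.05 floor, sampled at a few episode indices.
num_episodes = 5000
for i_episode in [1, 1000, 2500, 4750, 5000]:
    epsilon = max((num_episodes - i_episode) / num_episodes, 0.05)
    print(i_episode, epsilon) # 0.9998, 0.8, 0.5, 0.05, 0.05
###Output
_____no_output_____
###Markdown
`mc_control` below evaluates exactly this expression at the start of every episode.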
###Code
def generate_episode(env, policy):
episode = []
state = env.reset() # This loads the initial state
while True:
#probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
#action = np.random.choice(np.arange(2), p=probs)
action = policy[state]
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def mc_control(env, num_episodes, generate_episode, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# initialize default policy (Always hits)
policy = defaultdict(lambda: 1)
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# epsilon
epsilon = max((num_episodes - i_episode)/num_episodes, 0.05)
# Save greedy policy
if len(list(Q.keys())) > 0:
for state in list(Q.keys()):
policy[state] = np.where(Q[state]==Q[state].max())[0].max() # for each state, gets the action that maximizes return
# Create a non-greedy policy (a random action in every state)
not_greedy = defaultdict(lambda: np.random.choice(np.arange(2), p=[0.5, 0.5]))
# Decide between the greedy and non-greedy policy using the epsilon criterion
if np.random.choice(np.arange(2), p=[1-epsilon, epsilon]) == 1:
episode = generate_episode(env, not_greedy)
else:
episode = generate_episode(env, policy)
visited_states = [] #make a list of visited states for a First-Visit MC implementation
for i_visit,visit in enumerate(episode):
state, action, reward = visit
if state not in visited_states:
visited_states.append(state)
# get the reward after first visiting the current state
remaining_episode = episode[i_visit:]
sum_of_reward = sum([visit_reward[2] for visit_reward in remaining_episode])
# Get Q-value
Q[state][action] = Q[state][action] + alpha*(sum_of_reward - Q[state][action])
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 5000, generate_episode, 0.02)
###Output
Episode 5000/5000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
policy
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
_____no_output_____
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
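This solution computes returns backwards through the episode; the next cell shows the recursion $G_t = R_{t+1} + \gamma G_{t+1}$ on made-up rewards.
###Code
# Backward return computation on a toy 3-step episode (rewards are made up).
rewards = [0.0, 0.0, 1.0] # R_1, R_2, R_3
gamma = 0.9
T = len(rewards)
G = [0.0] * T
G[T - 1] = rewards[T - 1]
for t in range(T - 2, -1, -1):
    G[t] = rewards[t] + gamma * G[t + 1]
print(G) # [0.81, 0.9, 1.0] (up to floating-point rounding)
###Output
_____no_output_____
###Markdown
The `Gt` list in the implementation below is built with the same backward recursion.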
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
flag = defaultdict(lambda: np.zeros(env.action_space.n))
list_episode = generate_episode(env)
T = len(list_episode)
Gt = [0]*T
Gt[T-1] = list_episode[T-1][2]
for t in range(1, T):
Rt1 = list_episode[T-1-t][2]
Gt[T-1-t] = Rt1 + gamma*Gt[T-t]
for t in range(len(list_episode)):
St = list_episode[t][0]
At = list_episode[t][1]
Rt = list_episode[t][2]
if flag[St][At] == 0: # first visit policy
N[St][At] = N[St][At]+1
returns_sum[St][At] = returns_sum[St][At] + Gt[t]
flag[St][At] = 1
for s in N:
for a in range(env.action_space.n):
Q[s][a] = returns_sum[s][a]/N[s][a] if N[s][a] > 0 else 0.0 # guard against dividing by zero for actions never taken in state s
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
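A quick standalone look at the epsilon-greedy action probabilities that `get_probs` below constructs; the Q row and epsilon here are made-up values.
###Code
import numpy as np

# Epsilon-greedy probabilities for a two-action Q row (values are made up).
Q_s = np.array([0.2, -0.1]) # hypothetical action values for one state
epsilon, nA = 0.1, 2
probs = np.ones(nA) * epsilon / nA
probs[np.argmax(Q_s)] = 1 - epsilon + epsilon / nA
print(probs)       # [0.95 0.05]
print(probs.sum()) # 1.0
###Output
_____no_output_____
###Markdown
The greedy action keeps most of the probability mass while every action remains reachable with probability at least epsilon/nA.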
###Code
def get_probs(Q_s, epsilon, nA):
"""obtain the action probabilities corresponding to epsilon-greedy policy"""
action_prob = np.ones(nA)*epsilon / nA
best_a = np.argmax(Q_s)
action_prob[best_a] = 1 - epsilon + (epsilon/nA)
return action_prob
def generate_episode_from_Q(env, Q, epsilon, nA):
"""generate an episode from following the epsilon-greedy policy"""
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def update_Q(env, episode, Q, alpha, gamma):
"""update the action-value function estimate using the most recent episode"""
states, actions, rewards = zip(*episode)
# prepare for discounting (Gt)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
# G = sum(rewards[i:]*discounts[:-(i+1)])
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(i+1)]) - old_Q)
return Q
# def mc_control(env, num_episodes, alpha, gamma=1.0):
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, decay_rate = 0.999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start # start exploration at eps_start; it is decayed inside the loop
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
epsilon = max(epsilon * decay_rate, eps_min)
# generate an episode using Q
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update Q with the episode
Q = update_Q(env, episode, Q, alpha, gamma)
policy = dict((k, np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 50000, 0.02)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from blackjack import BlackjackEnv
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = BlackjackEnv()
# env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
sum_rewards = 0
for i_episode in range(3):
state = env.reset() # get the initial state: the sum of your cards, the dealer's face-up card, and the 'usable' ace flag
# print(state)
while True: # loop until the episode is done (stick or bust)
print('Episode: {}'.format(i_episode))
# print(state)
# action (0 = stick, 1 = hit) is randomly chosen from the action space
action = env.action_space.sample()
# print('--action--', action)
# take the action: 0 or 1 and get the new state, the reward, done status and info
state, reward, done, info = env.step(action)
if done:
print('final STATE: ', state)
print('End game! Reward: ', reward)
sum_rewards += reward
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
print('sum of rewards {}'.format(sum_rewards))
###Output
Episode: 0
final STATE: (16, 8, False)
End game! Reward: -1.0
You lost :(
Episode: 1
final STATE: (11, 1, False)
End game! Reward: -1.0
You lost :(
Episode: 2
Episode: 2
final STATE: (13, 1, False)
End game! Reward: 1.0
You won :)
sum of rewards -1.0
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset() # start the process and get the initial observation / state!
while True:
# set the probability values depending on the values of state[0] = the players current sum {0, 1, ..., 31}
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
# calculate the action 0 = stick or 1 = hit by using the probability probs
action = np.random.choice(np.arange(2), p=probs)
# call the step function with the calculated action.
# This returns the observation / state, the reword, done and info
next_state, reward, done, info = bj_env.step(action)
# save the information collected at this step
episode.append((state, action, reward))
state = next_state
# exit the while loop once the episode has ended
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(10):
print(generate_episode_from_limit_stochastic(env))
###Output
[((13, 4, True), 0, 1.0)]
[((14, 4, False), 1, -1)]
[((13, 3, False), 1, -1)]
[((7, 10, False), 1, 0), ((18, 10, True), 1, 0), ((18, 10, False), 0, -1.0)]
[((15, 4, False), 0, -1.0)]
[((7, 6, False), 0, 1.0)]
[((18, 2, False), 1, -1)]
[((14, 8, False), 0, -1.0)]
[((20, 10, False), 0, -1.0)]
[((8, 10, False), 1, 0), ((18, 10, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`. Hints on dictionaries and defaultdictWhen the values in a dictionary are collections (lists, dicts, etc.), the value (an empty list or dict) must be initialized the first time a given key is used. While this is relatively easy to do manually, the `defaultdict` type automates and simplifies these kinds of operations.A defaultdict works exactly like a normal dict, but it is initialized with a function (“default factory”) that takes no arguments and provides the default value for a nonexistent key.In the example below `lambda: np.zeros(env.action_space.n)` is the function that provides the default value for a nonexistent key! The value is a numpy array of size two with default values 0!
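The next cell is a tiny standalone demo of that behavior (the dictionary name and key are chosen just for illustration).
###Code
import numpy as np
from collections import defaultdict

# A missing key is created on first access using the factory function.
demo_Q = defaultdict(lambda: np.zeros(2))
demo_Q[(13, 10, False)][1] += 1.0 # no KeyError; the row [0., 0.] is created first
print(demo_Q[(13, 10, False)])    # [0. 1.]
print(len(demo_Q))                # 1
###Output
_____no_output_____
###Markdown
The three dictionaries initialized in the next cell rely on exactly this behavior.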
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries
# the states are the keys and arrays of the size of the action space hold the values
# in this example the arrays have length 2 and the position of the array (0 or 1) indicates the action
# position 0 represents action 0 and position 1 indicates action 1
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# generate episode
episode = generate_episode(env)
# print(episode)
# obtain the states, actions and rewards by using the zip() function
states, actions, rewards = zip(*episode)
# print(states, actions, rewards)
# accumulate the values for N, returns_sum and Q over all episodes
# this is the every-visit MC prediction!
for i, state in enumerate(states):
# update the state-action pair -> position 0 for action 0 and position 1 for action 1
# actions[i] is either 0 or 1
N[state][actions[i]] += 1
# update the accumulated return for each state-action pair
returns_sum[state][actions[i]] += sum(rewards[i:]) # return from time step i onward (gamma = 1 here)
# update the Q-table by accumulating over the dictionaries N and return_sum
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
# print(returns_sum)
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
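For orientation, the next cell traces the geometric epsilon decay used by the solution below (eps_start=1.0, eps_decay=0.999, eps_min=0.05) at a few episode counts.
###Code
# Geometric epsilon decay with a floor, printed at a few episode indices.
epsilon, eps_decay, eps_min = 1.0, 0.999, 0.05
for i_episode in range(1, 5001):
    epsilon = max(epsilon * eps_decay, eps_min)
    if i_episode in (1, 1000, 3000, 5000):
        print(i_episode, round(epsilon, 4))
# epsilon falls to about 0.37 by episode 1000 and sits at the 0.05 floor from roughly episode 3000 on
###Output
_____no_output_____
###Markdown
`mc_control` below applies the same `max(epsilon * eps_decay, eps_min)` step once per episode.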
###Code
def generate_episode_from_policy(env, policy):
episode = []
state = env.reset()
while True:
# choose the action related to the state if it exists
if state in policy:
action = policy[state]
# if the state is not in the policy, choose a random action (equally likely)
else:
action = env.action_space.sample()
# do the next step
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
# use the Q-table for generating the episode
def generate_episode_from_Q_table(env, Q, epsilon, nA):
episode = []
state = env.reset()
while True:
# choose the action related to the state if it exists
if state in Q:
prob = calculate_probability_distribution(Q[state], epsilon, nA)
action = np.random.choice(np.arange(nA), p = prob)
# if the state is not in the Q-table, choose a random action (equally likely)
else:
action = env.action_space.sample()
# do the next step
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def calculate_probability_distribution(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
prob = np.ones(nA) * (epsilon / nA)
best_a = np.argmax(Q_s)
prob[best_a] = 1 - epsilon + (epsilon / nA)
return prob
def update_Q_table(Q, episode, alpha): # take Q as an argument instead of relying on a module-level Q
# obtain the states, actions and rewards by using the zip() function
states, actions, rewards = zip(*episode)
# update the Q-table
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# init epsilon
epsilon = eps_start
# init policy
policy = defaultdict(lambda: 0)
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# calculate epsilon
epsilon = max(epsilon * eps_decay, eps_min)
# epsilon = (1 / i_episode) if epsilon > 0.1 else epsilon
# # set the probability for the epsilon-greedy policy
# probs = [1-epsilon, epsilon]
# # calculate the choice
# choice = np.random.choice(np.arange(nA), p=probs)
# # update policy
# for key in Q:
# if choice == 0:
# policy[key] = np.argmax(Q[key])
# else:
# policy[key] = env.action_space.sample()
# # generate episode from policy
# episode = generate_episode_from_policy(env, policy)
# generate episode from Q
episode = generate_episode_from_Q_table(env, Q, epsilon, nA)
# update the Q-table (state-action pairs)
Q = update_Q_table(Q, episode, alpha)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k, np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.002)
# policy, Q = mc_control(env, ?, ?)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
print(action)
state, reward, done, info = env.step(action)
print(state)
print(reward)
print(done)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(8, 10, False)
1
(10, 10, False)
0
False
(10, 10, False)
1
(16, 10, False)
0
False
(16, 10, False)
1
(24, 10, False)
-1
True
End game! Reward: -1
You lost :(
(16, 4, False)
1
(20, 4, False)
0
False
(20, 4, False)
0
(20, 4, False)
1.0
True
End game! Reward: 1.0
You won :)
(14, 10, False)
1
(24, 10, False)
-1
True
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((19, 10, True), 0, -1.0)]
[((17, 10, False), 1, -1)]
[((14, 10, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
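One piece that is easy to prototype separately is turning an action-value dictionary into a greedy policy; the next cell does this on made-up Q values.
###Code
import numpy as np

# Greedy policy extraction from a toy action-value dictionary (values are made up).
Q_example = {(20, 10, False): np.array([0.6, -0.8]),
             (13, 2, False): np.array([-0.3, 0.1])}
policy_example = dict((s, np.argmax(v)) for s, v in Q_example.items())
print(policy_example) # maps each state to its argmax action: STICK on 20, HIT on 13
###Output
_____no_output_____
###Markdown
The same one-liner can produce the `policy` dictionary that `mc_control` is expected to return.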
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(f"Observation space: \t{env.observation_space}")
print(f"Action space: \t\t{env.action_space}")
###Output
Observation space: Tuple(Discrete(32), Discrete(11), Discrete(2))
Action space: Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(19, 10, False)
End game! Reward: 1.0
You won :)
(14, 6, False)
(15, 6, False)
End game! Reward: 1.0
You won :)
(16, 3, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(5):
print(generate_episode_from_limit_stochastic(env))
###Output
[((18, 2, True), 0, 1.0)]
[((16, 5, False), 1, 0.0), ((18, 5, False), 1, -1.0)]
[((13, 5, False), 1, 0.0), ((17, 5, False), 1, -1.0)]
[((14, 4, False), 1, 0.0), ((17, 4, False), 1, -1.0)]
[((20, 10, False), 0, -1.0)]
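###Markdown
As a worked reading of the second sampled episode above, `episode[0] = ((16, 5, False), 1, 0.0)` corresponds to $(S_0, A_0, R_1)$ and `episode[1] = ((18, 5, False), 1, -1.0)` corresponds to $(S_1, A_1, R_2)$, so with $\gamma = 1$ the return from the start of the episode is $G_0 = R_1 + R_2 = 0.0 + (-1.0) = -1.0$.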
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
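Concretely, the estimate computed in the cell below is a sample average: writing the return from time step $t$ as $G_t = R_{t+1} + \gamma R_{t+2} + \cdots + \gamma^{T-t-1} R_T$, the table entry is $Q(s,a) = \frac{\texttt{returns\_sum}[s][a]}{N[s][a]}$, where `returns_sum[s][a]` accumulates the returns observed after visits to $(s,a)$ and `N[s][a]` counts those visits (first-visit MC would average only the return following the first occurrence of $(s,a)$ in each episode, but as noted above the two estimators coincide for Blackjack).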
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
R = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episode = generate_episode(env)
n = len(episode)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(n+1)])
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:] * discounts[:-(i+1)])
N[state][actions[i]] += 1
    # compute the Q table as the average return for each (state, action) pair
for state in returns_sum.keys():
for action in range(env.action_space.n):
Q[state][action] = returns_sum[state][action] / N[state][action]
return Q, returns_sum, N
###Output
_____no_output_____
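###Markdown
The bookkeeping `sum(rewards[i:] * discounts[:-(i+1)])` in the cell above pairs each reward with the matching power of $\gamma$. The short sketch below (illustrative only, not part of the original exercise) checks the slicing on a hand-written three-step episode with $\gamma = 0.5$, so the expected values are exact.
###Code
# Illustrative check of the discount-slice bookkeeping used in mc_prediction_q.
import numpy as np
gamma = 0.5
rewards = (0.0, 0.0, 1.0)                # R_1, R_2, R_3 of a three-step episode
discounts = np.array([gamma**i for i in range(len(rewards) + 1)])
G_0 = sum(rewards[0:] * discounts[:-1])  # 1*0.0 + 0.5*0.0 + 0.25*1.0 = 0.25
G_1 = sum(rewards[1:] * discounts[:-2])  # 1*0.0 + 0.5*1.0            = 0.5
G_2 = sum(rewards[2:] * discounts[:-3])  # 1*1.0                      = 1.0
###Output
_____no_output_____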
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q, R, N = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
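The implementation below combines two standard ingredients. The behaviour policy is $\epsilon$-greedy: `get_props` assigns probability $1 - \epsilon + \frac{\epsilon}{|\mathcal{A}|}$ to the greedy action $\arg\max_a Q(s,a)$ and probability $\frac{\epsilon}{|\mathcal{A}|}$ to every other action. The learning rule is the constant-$\alpha$ update $Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big(G_t - Q(S_t, A_t)\big)$, which `update_Q` applies to every state-action pair visited in the sampled episode.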
###Code
def generate_episode_from_Q(env, Q, epsilon, n):
""" generates an episode following the epsilon-greedy policy"""
episode = []
state = env.reset()
while True:
if state in Q:
action = np.random.choice(np.arange(n), p=get_props(Q[state], epsilon, n))
else:
action = env.action_space.sample()
next_state, reward, done, _ = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_props(Q_s, epsilon, n):
policy_s = np.ones(n) * epsilon / n
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / n)
return policy_s
def update_Q(episode, Q, alpha, gamma):
n = len(episode)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(n+1)])
for i, state in enumerate(states):
R = sum(rewards[i:] * discounts[:-(1+i)])
Q[state][actions[i]] = Q[state][actions[i]] + alpha * (R - Q[state][actions[i]])
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
epsilon = max(eps_min, epsilon * eps_decay)
episode = generate_episode_from_Q(env, Q, epsilon, nA)
Q = update_Q(episode, Q, alpha, gamma)
policy = dict((s, np.argmax(v)) for s, v in Q.items())
return policy, Q
###Output
_____no_output_____
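###Markdown
With the defaults `eps_start=1.0`, `eps_decay=.99999`, and `eps_min=0.05`, the exploration rate after $k$ episodes is $\epsilon_k = \max(0.05,\, 0.99999^{\,k})$, so it only reaches its floor after roughly $\ln(0.05)/\ln(0.99999) \approx 3 \times 10^5$ episodes; this is consistent with the several hundred thousand episodes used in the call below.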
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(13, 2, True)
(12, 2, False)
End game! Reward: 1.0
You won :)
(18, 6, False)
End game! Reward: -1.0
You lost :(
(18, 1, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((6, 4, False), 1, 0.0), ((16, 4, False), 1, -1.0)]
[((9, 1, False), 1, 0.0), ((15, 1, False), 1, 0.0), ((20, 1, False), 0, -1.0)]
[((17, 7, False), 1, 0.0), ((21, 7, False), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
counter = defaultdict(lambda: np.zeros(env.action_space.n))
episode = generate_episode(env)
len_ep = len(episode)
return_value = 0.0
for t in range(len_ep):
t_backward = len_ep-1 - t
current_state = episode[t_backward][0]
current_action = episode[t_backward][1]
            return_value = episode[t_backward][2] + gamma * return_value  # G_t = R_{t+1} + gamma * G_{t+1}
if counter[current_state][current_action] == 0:
counter[current_state][current_action] += 1
N[current_state][current_action] += 1
Q[current_state][current_action] += return_value
for k, a_value_list in Q.items():
for a in range(len(a_value_list)):
Q[k][a] /= N[k][a]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
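The dictionary comprehension below converts the action-value estimate into a state-value estimate via $v_\pi(s) = \sum_a \pi(a|s)\, q_\pi(s,a)$: under the policy used here this is $0.8\,Q(s, \texttt{STICK}) + 0.2\,Q(s, \texttt{HIT})$ when the player's sum exceeds 18 and $0.2\,Q(s, \texttt{STICK}) + 0.8\,Q(s, \texttt{HIT})$ otherwise, which is exactly what the two `np.dot` terms compute.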
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_given_Q(env, Q, epsilon, nA):
episode = []
state = env.reset()
while True:
if state not in Q:
action = env.action_space.sample()
else:
probs = np.ones(nA) * epsilon/nA
probs[np.argmax(Q[state])] = 1 - epsilon + epsilon/nA
action = np.random.choice(np.arange(nA), p=probs)
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def update_Q_given_episode_data(Q, episode_data, alpha, gamma):
episode_len = len(episode_data)
states, actions , rewards = zip(*episode_data)
G_t = 0.0
for t in range(episode_len):
t_backward = episode_len-1 - t
G_t = rewards[t_backward] + gamma*G_t
Q[states[t_backward]][actions[t_backward]] = (1-alpha)*Q[states[t_backward]][actions[t_backward]] + alpha*G_t
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
eps_start=1.0
eps_decay=.99999
eps_min=0.01
epsilon = eps_start
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
epsilon = max(epsilon*eps_decay, eps_min)
episode_data = generate_episode_given_Q(env, Q, epsilon, nA)
Q = update_Q_given_episode_data(Q, episode_data, alpha, gamma)
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
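###Markdown
Note that `update_Q_given_episode_data` accumulates the return backwards through the episode with the recursion $G_t = R_{t+1} + \gamma\, G_{t+1}$ (starting from $G_T = 0$), which is equivalent to the forward sum $G_t = \sum_{k=0}^{T-t-1} \gamma^k R_{t+k+1}$ computed with a `discounts` array in the other implementations in this document, but avoids building the array of powers of $\gamma$.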
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
num_episodes = 500000
alpha = 0.01
policy, Q = mc_control(env, num_episodes, alpha)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
sys.path.append('../../gym')
import gym
import numpy as np
from collections import defaultdict
import gym.spaces
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
print(env.action_space.n)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
2
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(20, 10, False)
End game! Reward: 0.0
You lost :(
(13, 10, False)
End game! Reward: -1
You lost :(
(17, 1, False)
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((5, 3, False), 1, 0), ((12, 3, False), 1, 0), ((21, 3, False), 0, 0.0)]
[((12, 1, True), 0, -1.0)]
[((16, 8, True), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# generate episode
episode = generate_episode(env)
#Generate state, action, reward
states, actions, rewards = zip(*episode)
# generate discount
G = np.array([gamma ** i for i in range(len(rewards) + 1)])
        for t in range(len(states)):  # include the final time step, which carries the terminal reward
S_t = states[t]
A_t = actions[t]
R_t = rewards[t]
returns_sum[S_t][A_t] += sum(rewards[t:] * G[: -(1 + t)])
N[S_t][A_t] += 1.0
Q[S_t][A_t] = returns_sum[S_t][A_t] / N[S_t][A_t]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def greedy_probability(Q_s, epsilon, nA):
policy_s = np.ones(nA) * epsilon / nA
    # choose the greedy action (the one with the largest estimated action value)
best_action = np.argmax(Q_s)
policy_s[best_action] = 1 - epsilon + (epsilon / nA)
return policy_s
def generate_epsilon_greedy_episode(env, Q, epsilon, nA):
episode = []
state = env.reset()
while True:
#probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
if state in Q:
probs = greedy_probability(Q[state], epsilon, nA)
action = np.random.choice(np.arange(nA), p=probs)
else:
action = env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# generate discount
G = np.array([gamma ** i for i in range(len(rewards) + 1)])
for t in range(0, len(states)):
S_t = states[t]
A_t = actions[t]
R_t = rewards[t]
#Update Q value
Qsa_old = Q[S_t][A_t]
Q[S_t][A_t] = Qsa_old + alpha * (sum(rewards[t:] * G[: -(1 + t)]) - Qsa_old)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# set the value of epsilon
epsilon = max(epsilon * eps_decay, eps_min)
# generate episode with epsilon greedy policy
episode = generate_epsilon_greedy_episode(env, Q, epsilon, nA)
#Generate state, action, reward
states, actions, rewards = zip(*episode)
# generate discount
G = np.array([gamma ** i for i in range(len(rewards) + 1)])
Q = update_Q(env, episode, Q, alpha, gamma)
# build policy
policy = dict((k, np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
#policy, Q = mc_control(env, 500000, 0.02)
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
env.action_space.n
###Output
_____no_output_____
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
# quick check of defaultdict behaviour: reading a missing key creates
# (and stores) a zero-filled array with one entry per action
A = defaultdict(lambda: np.zeros(env.action_space.n))
A['wei']
A
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(15, 10, False)
End game! Reward: -1.0
You lost :(
(17, 6, False)
End game! Reward: -1
You lost :(
(13, 10, False)
(20, 10, False)
End game! Reward: 0.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((12, 3, False), 1, 0), ((19, 3, False), 0, 1.0)]
[((20, 8, False), 0, 1.0)]
[((14, 6, False), 1, 0), ((16, 6, False), 1, 0), ((18, 6, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# generate an episode
episode = generate_episode(env)
# extract states, actions, rewards from the generated episode
states, actions, rewards = zip(*episode)
# create a discount array
discounts = np.array([gamma**i for i in range(len(rewards) + 1)])
for i, state in enumerate(states):
N[state][actions[i]] += 1
returns_sum[state][actions[i]] += sum(rewards[i:] * discounts[:-(i + 1)])
Q[state][actions[i]] = returns_sum[state][actions[i]]/N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def get_probs(Q_s, epsilon, nA):
"""obtains action probabilities corresponding to epsilon-greedy policy"""
policy = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy[best_a] = 1 - epsilon + (epsilon / nA)
return policy
def generate_episode_from_Q(env, Q, epsilon, nA):
"""generates an episode by following epsilon-greedy policy"""
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def update_Q(env, episode, Q, alpha, gamma):
"""updates the action-value function using the latest episode"""
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha * (sum(rewards[i:] * discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# update epsilon
epsilon = max(epsilon * eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the Q estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
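###Markdown
Unlike the $1/N(s,a)$ sample average used for prediction above, the constant step size keeps the estimate responsive to recent episodes: unrolling $Q \leftarrow Q + \alpha(G - Q)$ shows that after $n$ updates $Q_{n+1} = (1-\alpha)^n Q_1 + \sum_{k=1}^{n} \alpha (1-\alpha)^{n-k} G_k$, a recency-weighted average. This matters here because the $\epsilon$-greedy policy, and therefore the distribution of returns, keeps changing as learning proceeds.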
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.03)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
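###Markdown
Because `policy` is a plain dictionary keyed by state and `Q` maps each state to its array of action values, the learned behaviour can also be inspected state by state once training has finished. The sketch below is only illustrative; the state `(15, 10, False)` is an arbitrary example and is skipped if it was never visited.
###Code
# Illustrative lookup of the learned greedy action and action values for one state.
state = (15, 10, False)  # player sum 15, dealer shows 10, no usable ace
if state in policy:
    print('greedy action:', policy[state])  # 0 = STICK, 1 = HIT
    print('action values:', Q[state])       # [Q(s, STICK), Q(s, HIT)]
###Output
_____no_output_____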
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(10):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(20, 9, False)
End game! Reward: -1
You lost :(
(15, 8, False)
End game! Reward: -1
You lost :(
(13, 2, False)
(21, 2, False)
End game! Reward: 1.0
You won :)
(11, 7, False)
(21, 7, False)
End game! Reward: -1
You lost :(
(13, 10, False)
End game! Reward: -1.0
You lost :(
(11, 8, False)
(20, 8, False)
End game! Reward: 1.0
You won :)
(12, 10, False)
End game! Reward: -1
You lost :(
(17, 4, False)
End game! Reward: -1.0
You lost :(
(13, 6, False)
(15, 6, False)
End game! Reward: -1
You lost :(
(19, 10, True)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((17, 9, False), 0, 0.0)]
[((19, 4, False), 0, 1.0)]
[((19, 10, False), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
## first-visit MC prediction
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
S = set()
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
state_action_pair = (state, actions[i])
action = actions[i]
if state_action_pair not in S:
N[state][action] += 1
# returns_sum[state][action] += sum(rewards[i:]*discounts[:-(1+i)])
# Q[state][action] = returns_sum[state][action]/N[state][action]
G = sum(rewards[i:]*discounts[:-(1+i)]) ## a better way to update Q-table
Q[state][action] = Q[state][action] + (G - Q[state][action]) / N[state][action]
S.add((state, action))
return Q
print(env.observation_space[0])
###Output
Discrete(32)
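###Markdown
The commented-out lines in the cell above and the `(G - Q[state][action]) / N[state][action]` form are two ways of writing the same sample average: keeping a running `returns_sum` and dividing by the visit count, or applying the incremental-mean identity $\bar{G}_N = \bar{G}_{N-1} + \frac{1}{N}\big(G_N - \bar{G}_{N-1}\big)$, which needs no separate sum and is the form that generalises to the constant-$\alpha$ update in Part 2.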
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 5000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 5000/5000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def gen_epsilon_greedy_episode(bj_env, Q, nA, epsilon=0.05):
'''
implement how to sample based on given Q-table, from which we can infer the optimal policy
'''
episode = []
state = bj_env.reset()
while True:
if state in Q:
            probs = get_probs(Q, state, nA, epsilon)  # epsilon-greedy w.r.t. Q; unvisited states use the random branch below
action = np.random.choice(np.arange(2), p=probs)
else:
action = env.action_space.sample()
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q, state, nA, epsilon):
score = Q[state]
best_action = np.argmax(score)
probs = np.ones(nA) * epsilon / nA
probs[best_action] = 1 - epsilon + (epsilon / nA)
return probs
def mc_control(env, num_episodes, alpha, gamma=1.0, epsilon = 1.0, decay_rate = 0.9999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
N = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
S = set()
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
epsilon = max(decay_rate * epsilon, eps_min)
        episode = gen_epsilon_greedy_episode(env, Q, nA, epsilon)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
state_action_pair = (state, actions[i])
action = actions[i]
if state_action_pair not in S:
N[state][action] += 1
G = sum(rewards[i:]*discounts[:-(1+i)])
Q[state][action] = Q[state][action] + alpha * (G - Q[state][action])
S.add((state, action))
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 5000000, 0.02)
###Output
Episode 5000000/5000000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(8, 2, False)
End game! Reward: -1.0
You lost :(
(20, 10, False)
End game! Reward: 1.0
You won :)
(12, 7, False)
(17, 7, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((14, 1, False), 1, -1.0)]
[((18, 4, False), 1, 0.0), ((21, 4, False), 0, 1.0)]
[((21, 4, True), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states) :
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
import numpy as np
print(np.argmax(np.array([2, 2, 4, 1, 0])))
###Output
2
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA) :
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA) :
probs = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
probs[best_a] = 1 - epsilon + epsilon / nA
return probs
def update_Q(Q, episode, alpha, gamma) :
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
for i, state in enumerate(states) :
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
Q[state][actions[i]] += alpha * (returns_sum[state][actions[i]] - Q[state][actions[i]])
return Q
def generate_policy_from_Q(env, Q):
return dict((k,np.argmax(v)) for k, v in Q.items())
def mc_control(env, num_episodes, alpha, eps_start=1, eps_decay=0.999999, eps_min= 0.05, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
    # generate an initial stochastic episode (note: this episode is immediately overwritten inside the loop below)
episode = generate_episode_from_limit_stochastic(env)
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episode = generate_episode_from_Q(env, Q, epsilon, nA)
Q = update_Q(Q, episode, alpha, gamma)
epsilon = max(epsilon*eps_decay, eps_min)
policy = generate_policy_from_Q(env, Q)
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
print(state)
while True:
action = env.action_space.sample()
state, reward, done, info = env.step(action)
print(state)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(7, 3, False)
(7, 3, False)
End game! Reward: -1.0
You lost :(
(13, 10, False)
(16, 10, False)
(16, 10, False)
End game! Reward: 1.0
You won :)
(18, 6, False)
(22, 6, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((20, 10, False), 0, 1.0)]
[((9, 10, False), 1, 0.0), ((13, 10, False), 1, -1.0)]
[((15, 8, False), 1, 0.0), ((16, 8, False), 1, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
        # Generate an episode S_0, A_0, R_1, ..., S_T using the policy π
episode = generate_episode(env)
# TODO: review how to calculate return again
s_a_list = []
len_episode = len(episode)
for t in range(len_episode):
state_ = episode[t][0]
action_ = episode[t][1]
return_sum_ = sum([episode[i_sum][2] * gamma ** (i_sum - t) for i_sum in range(t, len_episode) ])
            # if (S_t, A_t) is a first visit (with return G_t)
if (state_, action_) not in s_a_list:
N[state_][action_] += 1
returns_sum[state_][action_] += return_sum_
s_a_list.append((state_, action_))
        # update Q only for state-action pairs that have actually been visited (avoids dividing by zero for unvisited pairs)
        for s in returns_sum:
            for a in range(env.action_space.n):
                if N[s][a] > 0:
                    Q[s][a] = returns_sum[s][a] / N[s][a]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_one_episode(bj_env, policy):
episode = []
state = bj_env.reset()
while True:
action = policy[state]
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def e_greedy(Q, n_actions, epsilon=0.1):
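    # note: for each state this samples one concrete action from the epsilon-greedy
    # distribution, so the returned dict acts as a deterministic policy for the episode
    # that follows; states not yet in Q fall back to action 0 via the defaultdict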
# initialize policy
policy = defaultdict(lambda : 0)
probs = [0 for i in range(n_actions)]
for s in Q.keys():
max_a_index = np.argmax(Q[s])
for a_idx in range(Q[s].size):
if a_idx == max_a_index:
probs[a_idx] = 1 - epsilon + epsilon / n_actions
else:
probs[a_idx] = epsilon / n_actions
choose_action = np.random.choice(np.arange(n_actions), p=probs)
policy[s] = choose_action
return policy
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# decaying epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# epsilon greedy policy from Q
policy = e_greedy(Q, nA, epsilon)
        episode = generate_one_episode(env, policy)
s_a_list = []
len_episode = len(episode)
for t in range(len_episode):
state_ = episode[t][0]
action_ = episode[t][1]
returns_sum = sum([episode[i_sum][2] * gamma ** (i_sum - t) for i_sum in range(t, len_episode) ])
            # if (S_t, A_t) is a first visit (with return G_t)
if (state_, action_) not in s_a_list:
Q[state_][action_] += alpha * (returns_sum - Q[state_][action_])
s_a_list.append((state_, action_))
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
# !{sys.executable} -m pip3 install gym
# !{sys.executable} -m pip3 install plot_utils
# !{sys.executable} -m pip install --upgrade matplotlib
import gym
import numpy as np
from collections import defaultdict
# !{sys.executable} -m pip3 install mpl_toolkits
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(17, 10, False)
End game! Reward: 1.0
You won :)
(16, 10, False)
End game! Reward: -1.0
You lost :(
(8, 6, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((13, 2, False), 1, 0), ((14, 2, False), 0, 1.0)]
[((7, 6, False), 0, -1.0)]
[((11, 7, False), 1, 0), ((20, 7, False), 0, 0.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
# get state, action, reward from episode
states, actions, rewards = zip(*episode)
# get discount
discount = np.array([gamma**i for i in range(len(rewards)+1)])
for t in range(len(states)):
N[states[t]][actions[t]] += 1
            # accumulate the full discounted return from time step t (not just the immediate reward)
            returns_sum[states[t]][actions[t]] += sum(rewards[t:]*discount[:-(1+t)])
Q[states[t]][actions[t]] = returns_sum[states[t]][actions[t]]/N[states[t]][actions[t]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, fancy_e, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], fancy_e, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
        # Specifically, the environment is implemented in gym/envs/toy_text/blackjack.py; the open-source repository is at https://github.com/openai/gym.git
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
_____no_output_____
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
        # if the player's current sum is greater than 18, choose STICK with probability 0.8 and HIT with probability 0.2
        probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
        # action STICK is 0 and action HIT is 1; sample 0 or 1 according to the probabilities in probs
        action = np.random.choice(np.arange(2), p=probs)
        # get the next state and reward; if the game has ended, break out of the loop
        next_state, reward, done, info = bj_env.step(action)
        # store the (state, action, reward) tuple in the episode
        episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
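As a quick reference (a standard definition, stated here as an aside): the discounted return from time step $t$ is $G_t = R_{t+1} + \gamma R_{t+2} + \ldots + \gamma^{T-t-1} R_{T}$, and MC prediction estimates $Q(s,a)$ by averaging the returns $G_t$ observed after visits to the pair $(s, a)$.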
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
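        # A minimal sketch of one way to complete this TODO (first-visit and every-visit
        # MC coincide for Blackjack); it mirrors the solved versions found elsewhere in
        # this document and is left commented out so the template stays untouched:
        # episode = generate_episode(env)
        # states, actions, rewards = zip(*episode)
        # discounts = np.array([gamma**i for i in range(len(rewards)+1)])
        # for i, state in enumerate(states):
        #     returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
        #     N[state][actions[i]] += 1.0
        #     Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]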
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
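        # A minimal sketch of one way to complete this TODO, assuming epsilon-greedy
        # helpers (generate_episode_from_Q, update_Q) like those in the solved versions
        # elsewhere in this document, with an epsilon value initialized before the loop:
        # epsilon = max(epsilon * 0.99999, 0.05)
        # episode = generate_episode_from_Q(env, Q, epsilon, nA)
        # Q = update_Q(env, episode, Q, alpha, gamma)
        # and, after the loop: policy = dict((k, np.argmax(v)) for k, v in Q.items())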
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
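# example values (the other runs in this document use these; adjust as you see fit):
# policy, Q = mc_control(env, 500000, 0.02)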
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v1')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(5):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
print(done)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(13, 10, False)
True
End game! Reward: 1.0
You won :)
(14, 7, False)
True
End game! Reward: -1.0
You lost :(
(14, 7, False)
False
(21, 7, False)
True
End game! Reward: 1.0
You won :)
(20, 10, True)
True
End game! Reward: 0.0
You lost :(
(10, 10, False)
False
(21, 10, True)
False
(16, 10, False)
True
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(10):
print(generate_episode_from_limit_stochastic(env))
###Output
[((9, 10, False), 1, 0.0), ((14, 10, False), 0, -1.0)]
[((20, 10, False), 1, -1.0)]
[((21, 10, True), 1, 0.0), ((21, 10, False), 0, 1.0)]
[((13, 6, False), 1, -1.0)]
[((14, 6, True), 1, 0.0), ((14, 6, False), 1, -1.0)]
[((19, 10, True), 0, -1.0)]
[((19, 10, False), 0, 1.0)]
[((19, 3, False), 0, 1.0)]
[((14, 2, False), 1, 0.0), ((16, 2, False), 1, 0.0), ((18, 2, False), 1, 0.0), ((20, 2, False), 1, -1.0)]
[((9, 6, False), 1, 0.0), ((12, 6, False), 1, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episode = generate_episode(env)
# obtain the states, actions, and rewards
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
print(gym.__version__)
###Output
/Users/wassim/anaconda/anaconda3/envs/rl/lib/python3.6/site-packages/gym/envs/registration.py:14: PkgResourcesDeprecationWarning: Parameters to load are deprecated. Call .resolve and .require separately.
result = entry_point.load(False)
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
wins = 0
num_episodes = 3
for i_episode in range(num_episodes):
state = env.reset()
while True:
print('Initial sum is {}'.format(state[0]))
# action = env.action_space.sample()
# if action == 1:
# print('We will stick')
# else :
# print('We will hit')
state, reward, done, info = env.step(1)
print('\nNew Flip ')
print('Did the action :' )
print('STATE :Player current sum is {}'.format(state[0]))
print('STATE :Dealer face up card is {}'.format(state[1]))
print('STATE :Do i have a faceup card {}'.format(state[2]))
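        # note: state[2] is the usable-ace flag (True/False), not a face-up card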
if done:
print('End game! Reward: ', reward)
if reward > 0 :
print('You won :)\n')
wins += 1
else :
print('You lost :(\n')
print('Episode Ended \n' + '='* 30)
break
print(wins)
###Output
Initial sum is 8
New Flip
Did the action :
STATE :Player current sum is 15
STATE :Dealer face up card is 7
STATE :Do i have a faceup card False
Initial sum is 15
New Flip
Did the action :
STATE :Player current sum is 24
STATE :Dealer face up card is 7
STATE :Do i have a faceup card False
End game! Reward: -1
You lost :(
Episode Ended
==============================
Initial sum is 12
New Flip
Did the action :
STATE :Player current sum is 18
STATE :Dealer face up card is 7
STATE :Do i have a faceup card False
Initial sum is 18
New Flip
Did the action :
STATE :Player current sum is 25
STATE :Dealer face up card is 7
STATE :Do i have a faceup card False
End game! Reward: -1
You lost :(
Episode Ended
==============================
Initial sum is 16
New Flip
Did the action :
STATE :Player current sum is 17
STATE :Dealer face up card is 1
STATE :Do i have a faceup card True
Initial sum is 17
New Flip
Did the action :
STATE :Player current sum is 21
STATE :Dealer face up card is 1
STATE :Do i have a faceup card True
Initial sum is 21
New Flip
Did the action :
STATE :Player current sum is 15
STATE :Dealer face up card is 1
STATE :Do i have a faceup card False
Initial sum is 15
New Flip
Did the action :
STATE :Player current sum is 25
STATE :Dealer face up card is 1
STATE :Do i have a faceup card False
End game! Reward: -1
You lost :(
Episode Ended
==============================
0
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(10):
print(generate_episode_from_limit_stochastic(env))
###Output
[((19, 10, False), 0, -1.0)]
[((12, 8, False), 0, -1.0)]
[((15, 9, False), 1, 0), ((18, 9, False), 1, -1)]
[((8, 10, False), 0, -1.0)]
[((20, 3, False), 0, 1.0)]
[((19, 2, True), 0, 1.0)]
[((11, 3, False), 1, 0), ((21, 3, False), 1, -1)]
[((16, 4, False), 0, 1.0)]
[((7, 6, False), 1, 0), ((15, 6, False), 1, 0), ((20, 6, False), 0, 1.0)]
[((12, 6, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
# sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states , actions , rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i , state in enumerate(states):
# This will hold discounted reward from this state-action pair
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q,N
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q , N = mc_prediction_q(env, 50000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
V_to_plot
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
[2018-09-23 08:42:34,710] Making new env: Blackjack-v0
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
#player's current sum, 0 to 31
#dealer's face-up card, 1 to 10
#whether the player has a usable ace, 0 or 1
print(env.observation_space)
#hit or stick
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
if reward > 0:
print('You won :)\n')
else:
print('You lost :(\n')
break
###Output
(12, 10, True)
('End game! Reward: ', 1.0)
You won :)
(9, 10, False)
(11, 10, False)
(12, 10, False)
('End game! Reward: ', -1.0)
You lost :(
(7, 10, False)
(14, 10, False)
(21, 10, False)
('End game! Reward: ', -1)
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
#action 1 is hit, 0 is stick
action = np.random.choice(np.arange(2), p=probs)
print(action)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
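# Illustrative reminder of the episode format described above: each entry is
# (S_i, A_i, R_{i+1}), so zip(*episode) splits an episode into parallel
# tuples of states, actions and rewards. The episode below is made up.
_ex_episode = [((13, 6, False), 1, 0), ((17, 6, False), 0, 1.0)]
_ex_states, _ex_actions, _ex_rewards = zip(*_ex_episode)
assert _ex_rewards == (0, 1.0)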
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
1
1
1
[((13, 6, False), 1, 0), ((17, 6, False), 1, 0), ((20, 6, False), 1, -1)]
0
[((20, 2, False), 0, 1.0)]
0
[((19, 4, False), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
    # initialize empty dictionaries of arrays
    returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
    N = defaultdict(lambda: np.zeros(env.action_space.n))
    Q = defaultdict(lambda: np.zeros(env.action_space.n))
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 100000 == 0:
            print(i_episode, num_episodes)
            # print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
            # sys.stdout.flush()
        # run an episode with the supplied stochastic policy
        episode = generate_episode(env)
        states, actions, rewards = zip(*episode)
        # prepare the discount factors for computing returns
        discounts = np.array([gamma**i for i in range(len(rewards)+1)])
        # first-visit MC prediction: only the first occurrence of each
        # (state, action) pair within an episode contributes
        visited = set()
        for i, state in enumerate(states):
            if (state, actions[i]) in visited:
                continue
            visited.add((state, actions[i]))
            N[state][actions[i]] += 1
            # the target is the discounted return from step i, not the immediate reward
            returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
            Q[state][actions[i]] = returns_sum[state][actions[i]]/N[state][actions[i]]
    return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
(100000, 500000)
(200000, 500000)
(300000, 500000)
(400000, 500000)
(500000, 500000)
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_epsilon_greedy(bj_env, Q, epsilon=0.01):
episode = []
state = bj_env.reset()
while True:
seed = np.random.random(1)[0]
if seed < epsilon:
#random
action = np.random.choice(np.arange(2), p=[0.5, 0.5])
else:
#greedy
            # greedy: take the index of the best action (argmax), not its value
            action = int(np.argmax(Q[state]))
#action 1 is hit, 0 is stick
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
for i in range(3):
print(generate_episode_from_epsilon_greedy(env, Q, 0.5))
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
epsilon = eps_start
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 100000 == 0:
print(i_episode, num_episodes)
sys.stdout.flush()
## TODO: complete the function
# generate episode with current policy
epsilon = max(epsilon*eps_decay, eps_min)
episode = generate_episode_from_epsilon_greedy(env, Q, epsilon)
visited = defaultdict(lambda: np.zeros(nA))
# improve policy
# for i in range(len(episode)):
# state, action, reward = episode[i]
# #if first visit
# #if visited[state][action] == 0:
# Q[state][action] = (1-alpha)*Q[state][action] + alpha*reward*(gamma**i)
# # visited[state][action] = 1
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
policy = defaultdict(lambda: np.zeros(1))
for state in Q:
policy[state] = int(np.argmax(Q[state]))
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
(100000, 1000000)
(200000, 1000000)
(300000, 1000000)
(400000, 1000000)
(500000, 1000000)
(600000, 1000000)
(700000, 1000000)
(800000, 1000000)
(900000, 1000000)
(1000000, 1000000)
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
/home/brabeem/anaconda3/lib/python3.7/site-packages/ale_py/roms/utils.py:90: DeprecationWarning: SelectableGroups dict interface is deprecated. Use select.
for external in metadata.entry_points().get(self.group, []):
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v1')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(12, 2, False)
End game! Reward: 1.0
You won :)
(13, 10, False)
End game! Reward: 1.0
You won :)
(11, 3, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
print(np.random.choice(np.arange(2),p=[0.01,0.99]))
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((10, 10, False), 1, 0.0), ((20, 10, False), 1, -1.0)]
[((20, 10, False), 0, 1.0)]
[((17, 7, False), 1, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
print(returns_sum["name"][0])
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))#dictionary of lists
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states,actions,rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards))])
for i,state in enumerate(states):
returns_sum[state][actions[i]] += np.sum(rewards[i:] * discounts[:len(discounts)-i])
N[state][actions[i]] += 1
Q[state][actions[i]] = returns_sum[state][actions[i]]/N[state][actions[i]]
return Q
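# Note: the loop above implements every-visit MC. A first-visit variant would
# only count the first occurrence of each (state, action) pair per episode,
# e.g. (illustrative sketch, same unpacking as above):
#
#     seen = set()
#     for i, state in enumerate(states):
#         if (state, actions[i]) in seen:
#             continue
#         seen.add((state, actions[i]))
#         ...  # identical update to the one above
#
# As the markdown cell above notes, the two variants are equivalent for this
# Blackjack environment.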
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode(Q_s,epsilon,env,nA):
state = env.reset()
episode = []
while True:
action = np.random.choice(np.arange(nA),p=get_probs(Q_s[state],epsilon,nA))
next_state,reward,done,info= env.step(action)
episode.append((state,action,reward))
state = next_state
if done is True:
break
return episode
def get_probs(Q_s,epsilon,nA):
probs = np.ones(nA) * (epsilon/nA)
best_a = np.argmax(Q_s)
probs[best_a] = 1 - epsilon + (epsilon/nA)
return probs
def update_Q(Q,episode,gamma,alpha):
states,actions,rewards= zip(*episode)
discount = np.array([gamma**i for i in range(len(rewards))])
for j,state in enumerate(states):
g = np.sum(rewards[j:] * discount[:len(rewards)-j])
Q[state][actions[j]] = Q[state][actions[j]] + alpha * (g - Q[state][actions[j]])
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0,epsilon_start=1,epsilon_decay=.99999,epsilon_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
epsilon = epsilon_start
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
epsilon = max(epsilon*epsilon_decay ,epsilon_min)
episode = generate_episode(Q,epsilon=epsilon,env=env,nA=nA)
Q = update_Q(Q,episode,gamma,alpha)
policy = dict((k,np.argmax(acts)) for k,acts in Q.items())
return policy, Q
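# Illustrative note on the schedule above: with epsilon_decay = 0.99999 and
# epsilon_min = 0.05, epsilon decays from 1.0 down to the floor after roughly
# log(0.05)/log(0.99999) ~ 300,000 episodes, so later episodes mostly exploit.
_episodes_to_floor = np.log(0.05)/np.log(0.99999)
assert 290000 < _episodes_to_floor < 310000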
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(21, 9, True)
End game! Reward: 1.0
You won :)
(21, 10, True)
(15, 10, False)
(16, 10, False)
End game! Reward: -1.0
You lost :(
(20, 8, False)
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
episode = generate_episode_from_limit_stochastic(env)
s, a, r = zip(*episode)
discounts = np.array([0.8**i for i in range(len(r)+1)])
discounts, r
0.8**1
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
s, a, r = zip(*episode)
discounts = np.array([gamma**i for i in range(len(r)+1)])
        for i, state in enumerate(s):
            returns_sum[state][a[i]] += sum(r[i:]*discounts[:-(1+i)])
            N[state][a[i]] += 1.0
            Q[state][a[i]] = returns_sum[state][a[i]] / N[state][a[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
Q[(12, 10, False)][0]
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
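###Markdown
The cell above leaves the control loop as a TODO. A minimal sketch of one way it could be completed is shown below, assuming a fixed epsilon of 0.1 rather than a decay schedule; the names `sketch_get_probs`, `sketch_generate_episode`, and `sketch_mc_control` are illustrative helpers, not part of the starter code or the solution notebook.
###Code
def sketch_get_probs(Q_s, epsilon, nA):
    # epsilon-greedy probabilities: every action gets epsilon/nA,
    # and the greedy action receives the remaining 1 - epsilon on top
    probs = np.ones(nA) * epsilon / nA
    probs[np.argmax(Q_s)] += 1 - epsilon
    return probs
def sketch_generate_episode(env, Q, epsilon, nA):
    # roll out one episode while following the epsilon-greedy policy w.r.t. Q
    episode, state = [], env.reset()
    while True:
        action = np.random.choice(np.arange(nA), p=sketch_get_probs(Q[state], epsilon, nA))
        next_state, reward, done, info = env.step(action)
        episode.append((state, action, reward))
        state = next_state
        if done:
            return episode
def sketch_mc_control(env, num_episodes, alpha, gamma=1.0, epsilon=0.1):
    nA = env.action_space.n
    Q = defaultdict(lambda: np.zeros(nA))
    for i_episode in range(1, num_episodes+1):
        episode = sketch_generate_episode(env, Q, epsilon, nA)
        states, actions, rewards = zip(*episode)
        discounts = np.array([gamma**i for i in range(len(rewards)+1)])
        for i, state in enumerate(states):
            # constant-alpha update towards the sampled return from step i
            G = sum(rewards[i:]*discounts[:-(1+i)])
            Q[state][actions[i]] += alpha*(G - Q[state][actions[i]])
    policy = dict((k, np.argmax(v)) for k, v in Q.items())
    return policy, Q
###Output
_____no_output_____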
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(14, 7, True)
End game! Reward: -1.0
You lost :(
(20, 4, False)
End game! Reward: -1
You lost :(
(14, 9, False)
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(*generate_episode_from_limit_stochastic(env))
###Output
((18, 5, True), 1, 0) ((18, 5, False), 0, 1.0)
((17, 10, False), 0, 1.0)
((12, 1, False), 1, 0) ((13, 1, False), 1, 0) ((19, 1, False), 0, -1.0)
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode_from_limit_stochastic(env)
states, actions, rewards = zip(*episode)
for i, state in enumerate(states):
N[state][actions[i]] = N[state][actions[i]]+1
returns_sum[state][actions[i]] += sum(rewards[i:])
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 50000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 50000/50000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
# print(env.action_space.sample())
next_state, reward, done, info = env.step(action)
# print(env.step(action))
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
# print(discounts[:-(i+i)])
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
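# Worked example of the constant-alpha update performed in update_Q above
# (numbers are illustrative): with alpha = 0.1, an old estimate Q(s,a) = 0.2
# and a sampled return G = 1.0, the new estimate is 0.2 + 0.1*(1.0 - 0.2) = 0.28.
assert abs((0.2 + 0.1*(1.0 - 0.2)) - 0.28) < 1e-12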
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 50000, 0.1)
###Output
Episode 50000/50000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print("Final action: ", action)
print("Final state: ", state)
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
_____no_output_____
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
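# Worked example of the discounting slice used above (illustrative numbers):
# with gamma = 0.9 and rewards = (0, 0, 1), discounts = [1, 0.9, 0.81, 0.729].
# For the visit at i = 1, rewards[1:] = (0, 1) pairs with
# discounts[:-(1+1)] = [1.0, 0.9], so the return is 0*1.0 + 1*0.9 = 0.9.
_example_discounts = np.array([0.9**i for i in range(3+1)])
assert np.isclose(sum((0, 1)*_example_discounts[:-2]), 0.9)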
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# Q = mc_prediction_q(env, 5, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
from scipy.stats import bernoulli
import numpy as np
epsilon_threshold = 0.1
def epsilon_greedy(Q, epsilon, env):
policy = {}
for state in Q.keys():
if bernoulli.rvs(1 - epsilon): # Exploit. Take the greedy action
policy[state] = np.argmax(Q[state])
else: # Explore. Sample uniformly from all actions
policy[state] = env.action_space.sample()
return policy
def generate_episode_for_policy(policy, bj_env):
episode = []
state = bj_env.reset()
while True:
action = policy.get(state, env.action_space.sample())
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def update_Q(env, episode, Q, alpha, gamma):
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
cumulative_reward = sum(rewards[i:] * discounts[:-(1+i)])
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha * (cumulative_reward - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# Decay linearly
epsilon = max((num_episodes - i_episode - 1) / num_episodes, epsilon_threshold)
policy = epsilon_greedy(Q, epsilon, env)
episode = generate_episode_for_policy(policy, env)
Q = update_Q(env, episode, Q, alpha, gamma)
return policy, Q
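# Illustrative check of the linear decay above (episode counts are assumed):
# for num_episodes = 500000, epsilon is about 0.5 halfway through and is
# clipped at epsilon_threshold = 0.1 over roughly the last tenth of training.
assert max((500000 - 250000 - 1)/500000, epsilon_threshold) > 0.49
assert max((500000 - 460000 - 1)/500000, epsilon_threshold) == 0.1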
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
print(action)
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(15, 5, True)
1
(21, 5, True)
0
End game! Reward: 1.0
You won :)
(13, 1, False)
1
(19, 1, False)
1
End game! Reward: -1
You lost :(
(12, 10, False)
0
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
episode = generate_episode_from_limit_stochastic(env)
print("Episode:", episode)
states, actions, rewards = zip(*episode)
print("states:",states,"actions:",actions,"rewards:",rewards)
###Output
Episode: [((13, 10, False), 1, -1)]
states: ((13, 10, False),) actions: (1,) rewards: (-1,)
Episode: [((13, 5, False), 1, 0), ((21, 5, False), 1, -1)]
states: ((13, 5, False), (21, 5, False)) actions: (1, 1) rewards: (0, -1)
Episode: [((14, 10, False), 0, -1.0)]
states: ((14, 10, False),) actions: (0,) rewards: (-1.0,)
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(i+1)])
N[state][actions[i]]+=1
Q[state][actions[i]] = returns_sum[state][actions[i]]/N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
#np.random.seed(0)
## TODO: Define necessary functions
class Policy:
def __init__(self, Q, epsilon, nA):
self.Q = Q
self.eps = epsilon
self.num_actions = nA
def sample_action(self, state):
        if state in self.Q:
best_action = np.argmax(self.Q[state])
if np.random.uniform()>self.eps:
return best_action
return np.random.choice(np.arange(self.num_actions))
def gen_episode(env, policy):
    episode = []
    state = env.reset()
    while True:
        action = policy.sample_action(state)
        next_state, reward, done, info = env.step(action)
        # record the state the action was taken in, not the resulting state
        episode.append((state, action, reward))
        state = next_state
        if done:
            break
    return episode
def update_Q(Q, episode, alpha, gamma):
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_q = Q[state][actions[i]]
ret = sum(rewards[i:]*discounts[:-(i+1)])
Q[state][actions[i]] = old_q + alpha * (ret-old_q)
return Q
## TODO: complete the function
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_decay=.99999):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# start epsilon, final_epsilon
epsilon, eps_min = 1.0, 0.0
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
epsilon = max(epsilon, eps_min)
policy = Policy(Q, epsilon, nA)
episode = gen_episode(env, policy)
Q = update_Q(Q, episode, alpha, gamma)
epsilon = epsilon * eps_decay
policy = dict((state, np.argmax(values)) for state, values in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
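(Before running it, a quick illustrative sketch of how an observation unpacks and how an action index maps back to a name; the tuple values here are made up for the example, not taken from a real episode.)

```
# Hypothetical observation -- the numbers are illustrative only.
player_sum, dealer_card, usable_ace = (14, 10, False)
ACTION_NAMES = {0: 'STICK', 1: 'HIT'}
print(player_sum, dealer_card, usable_ace, ACTION_NAMES[1])
```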
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print('State:', state)
action = env.action_space.sample()
        print('Action:', 'HIT' if action == 1 else 'STICK')
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
State: (16, 10, False)
Action: STICK
End game! Reward: -1.0
You lost :(
State: (20, 7, False)
Action: HIT
End game! Reward: -1
You lost :(
State: (13, 10, False)
Action: HIT
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
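Written out explicitly, the policy sampled by the cell below is (using $s_0$ for the player's current sum):

$$\pi(\texttt{HIT} \mid s) = \begin{cases} 0.2 & \text{if } s_0 > 18, \\ 0.8 & \text{if } s_0 \le 18, \end{cases} \qquad \pi(\texttt{STICK} \mid s) = 1 - \pi(\texttt{HIT} \mid s).$$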
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((13, 2, False), 0, 1.0)]
[((20, 4, False), 0, 1.0)]
[((9, 10, False), 1, 0), ((19, 10, False), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
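As a sketch of what the three dictionaries in the implementation below accumulate (every-visit MC, standard notation; $G_t$ is the return following a visit to the pair $(s, a)$):

$$Q(s, a) = \frac{\texttt{returns\_sum}[s][a]}{N[s][a]} \approx \frac{1}{N(s, a)} \sum_{\text{visits to } (s, a)} G_t .$$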
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
states, actions, rewards = zip(*generate_episode(env))
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
N[state][actions[i]] += 1
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(i+1)])
Q[state][actions[i]] = returns_sum[state][actions[i]]/N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic) #500000
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
epsilon = max(epsilon*eps_decay, eps_min)
episode = generate_episode_from_Q(env, Q, epsilon, nA)
Q = update_Q(env, episode, Q, alpha, gamma)
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(11, 9, False)
(21, 9, False)
End game! Reward: 1.0
You won :)
(20, 6, False)
End game! Reward: -1
You lost :(
(16, 10, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((19, 10, True), 0, 1.0)]
[((21, 4, True), 0, 1.0)]
[((15, 3, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discount = np.array([gamma**i for i in range(len(rewards)+1)])
for i in range(len(states)):
returns_sum[states[i]][actions[i]] += sum(rewards[i:]*discount[:-(1+i)])
N[states[i]][actions[i]] += 1.0
Q[states[i]][actions[i]] = returns_sum[states[i]][actions[i]] / N[states[i]][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
print(Q)
###Output
defaultdict(<function mc_prediction_q.<locals>.<lambda> at 0x000001F105D896A8>, {(15, 10, False): array([-0.57037844, -0.65142617]), (18, 10, False): array([-0.2411813 , -0.73465183]), (13, 10, False): array([-0.58646836, -0.59391745]), (20, 10, False): array([ 0.4440851, -0.8860992]), (19, 10, False): array([-0.02354434, -0.7863082 ]), (10, 8, False): array([-0.54466859, -0.06707734]), (17, 8, False): array([-0.38211382, -0.63884205]), (16, 1, True): array([-0.73109244, -0.54037267]), (12, 1, False): array([-0.78040904, -0.62514552]), (18, 1, False): array([-0.37555556, -0.78214286]), (7, 1, False): array([-0.75609756, -0.63636364]), (17, 1, False): array([-0.62377317, -0.75321704]), (19, 1, False): array([-0.11542793, -0.82076503]), (15, 9, True): array([-0.53571429, -0.25238095]), (15, 9, False): array([-0.53896817, -0.58893617]), (8, 4, False): array([-0.24166667, -0.36259143]), (15, 4, False): array([-0.20730397, -0.60617426]), (7, 10, False): array([-0.54345006, -0.51963351]), (16, 10, False): array([-0.56939704, -0.67970964]), (17, 10, False): array([-0.47903706, -0.70367561]), (14, 2, True): array([-0.2605042 , -0.16262136]), (17, 4, False): array([-0.06966292, -0.65305556]), (20, 4, False): array([ 0.64067865, -0.86260163]), (14, 10, False): array([-0.54748603, -0.62728762]), (12, 10, True): array([-0.6 , -0.27155727]), (12, 10, False): array([-0.60892543, -0.57047521]), (20, 8, True): array([ 0.80203046, -0.02531646]), (10, 7, False): array([-0.50810811, -0.06078824]), (20, 7, False): array([ 0.77461191, -0.85928854]), (14, 5, False): array([-0.14898178, -0.53626499]), (8, 10, False): array([-0.59959759, -0.52028265]), (10, 5, False): array([-0.14613181, -0.13244569]), (12, 5, False): array([-0.12630359, -0.44570779]), (15, 5, False): array([-0.21990741, -0.5719385 ]), (20, 8, False): array([ 0.78475858, -0.9036805 ]), (21, 10, True): array([ 0.89085209, -0.19428238]), (7, 5, False): array([-0.21787709, -0.37418514]), (16, 5, False): array([-0.15473441, -0.63506213]), (20, 5, False): array([ 0.67193516, -0.85881435]), (19, 4, False): array([ 0.40886147, -0.7900232 ]), (16, 4, False): array([-0.18151071, -0.63368007]), (14, 8, False): array([-0.51219512, -0.51641625]), (18, 8, False): array([ 0.12430011, -0.64636488]), (13, 1, False): array([-0.76497696, -0.68534107]), (20, 3, True): array([ 0.59906396, -0.01298701]), (16, 9, False): array([-0.55855856, -0.60599272]), (9, 10, False): array([-0.61645963, -0.3814994 ]), (21, 8, True): array([ 0.92807825, -0.11685393]), (5, 9, False): array([-0.52941176, -0.44186047]), (7, 9, False): array([-0.65048544, -0.46701847]), (15, 8, False): array([-0.5389755 , -0.59509537]), (13, 3, False): array([-0.24914676, -0.53016241]), (12, 2, False): array([-0.30880713, -0.46451795]), (19, 2, False): array([ 0.383291 , -0.77777778]), (14, 4, False): array([-0.15811966, -0.54819945]), (17, 3, False): array([-0.16238438, -0.64948165]), (9, 1, False): array([-0.78313253, -0.46695279]), (11, 1, False): array([-0.79295154, -0.29883721]), (10, 3, False): array([-0.28686327, -0.08344924]), (15, 3, False): array([-0.2462077 , -0.58311874]), (13, 6, False): array([-0.18510158, -0.46608443]), (19, 6, False): array([ 0.50857775, -0.7636787 ]), (14, 2, False): array([-0.28663793, -0.55986696]), (20, 1, False): array([ 0.14997948, -0.92776886]), (19, 8, False): array([ 0.59561921, -0.73341232]), (17, 5, True): array([ 0.00793651, -0.29389313]), (18, 5, False): array([ 0.17037862, -0.68300744]), (13, 7, False): array([-0.49771689, -0.47274781]), (5, 4, False): 
array([-0.20408163, -0.40059347]), (17, 6, False): array([ 0.03184713, -0.64955417]), (18, 6, False): array([ 0.29013761, -0.68201171]), (21, 6, False): array([ 0.89723502, -1. ]), (10, 9, False): array([-0.54285714, -0.14257294]), (20, 9, False): array([ 0.74937052, -0.91990847]), (13, 8, False): array([-0.49943883, -0.49269663]), (12, 6, False): array([-0.11883408, -0.42411885]), (16, 8, False): array([-0.46231721, -0.59521824]), (20, 6, False): array([ 0.70729348, -0.8984252 ]), (13, 4, False): array([-0.19087635, -0.49434619]), (9, 4, False): array([-0.25974026, -0.12694513]), (21, 1, True): array([ 0.63913043, -0.30162413]), (14, 4, True): array([-0.11111111, -0.10152284]), (8, 7, False): array([-0.47933884, -0.34862385]), (15, 7, False): array([-0.45050056, -0.55552408]), (21, 9, False): array([ 0.94707521, -1. ]), (9, 2, False): array([-0.37857143, -0.17897271]), (11, 2, False): array([-0.27944573, -0.0815187 ]), (16, 8, True): array([-0.5 , -0.25490196]), (19, 8, True): array([ 0.59621451, -0.17177914]), (15, 6, False): array([-0.17686318, -0.53936444]), (19, 4, True): array([0.47386172, 0.00729927]), (14, 10, True): array([-0.5863747 , -0.34513844]), (21, 10, False): array([ 0.89215799, -1. ]), (15, 4, True): array([-0.25 , -0.17675545]), (15, 8, True): array([-0.57407407, -0.26696833]), (14, 9, False): array([-0.5323496 , -0.55392434]), (10, 10, False): array([-0.58831711, -0.24770642]), (17, 8, True): array([-0.3880597 , -0.33402062]), (18, 4, False): array([ 0.0994709 , -0.68127148]), (21, 2, True): array([0.87407407, 0.03174603]), (6, 3, False): array([-0.30496454, -0.30360531]), (16, 3, False): array([-0.25596529, -0.61028379]), (20, 3, False): array([ 0.66327569, -0.87252573]), (18, 6, True): array([ 0.2238806 , -0.23956443]), (15, 2, False): array([-0.36200717, -0.58544653]), (21, 3, True): array([0.88245718, 0.00672646]), (8, 2, False): array([-0.43426295, -0.39585492]), (17, 2, False): array([-0.17290749, -0.66583954]), (10, 2, False): array([-0.28291317, -0.06538735]), (20, 2, False): array([ 0.6450772 , -0.87153053]), (7, 8, False): array([-0.4741784 , -0.41810919]), (13, 5, False): array([-0.15592028, -0.49942363]), (20, 2, True): array([ 0.61728395, -0.08641975]), (21, 7, True): array([ 0.92721893, -0.06308411]), (12, 9, False): array([-0.49318182, -0.50414404]), (5, 10, False): array([-0.52542373, -0.52316991]), (5, 5, False): array([-0.20833333, -0.39650146]), (9, 5, False): array([-0.23129252, -0.2111588 ]), (11, 8, False): array([-0.5403423 , -0.01240135]), (21, 8, False): array([ 0.93197279, -1. ]), (21, 7, False): array([ 0.92095588, -1. ]), (14, 8, True): array([-0.57446809, -0.15422886]), (18, 8, True): array([ 0.12121212, -0.26526718]), (19, 9, True): array([ 0.28314239, -0.12056738]), (15, 5, True): array([ 0.06122449, -0.21985816]), (17, 5, False): array([-0.07580478, -0.6504543 ]), (11, 6, False): array([-0.2853598, -0.0472028]), (12, 3, False): array([-0.21710526, -0.47881234]), (14, 3, False): array([-0.29782609, -0.51804794]), (11, 3, False): array([-0.24716553, -0.08545994]), (21, 3, False): array([ 0.88686131, -1. 
]), (8, 9, False): array([-0.57735849, -0.44034918]), (13, 9, False): array([-0.47716895, -0.51426102]), (18, 1, True): array([-0.25 , -0.55008787]), (19, 1, True): array([-0.18932874, -0.37267081]), (4, 5, False): array([-0.32 , -0.21468927]), (16, 7, True): array([-0.45762712, -0.33035714]), (14, 7, False): array([-0.47039106, -0.52589413]), (18, 7, False): array([ 0.43946188, -0.67284127]), (14, 6, False): array([-0.14853801, -0.50586592]), (15, 1, False): array([-0.7878453 , -0.69723476]), (18, 3, False): array([ 0.17208814, -0.69175229]), (9, 7, False): array([-0.43150685, -0.10677291]), (19, 7, False): array([ 0.62714728, -0.76705142]), (16, 6, False): array([-0.15243243, -0.58504725]), (12, 8, False): array([-0.5440658 , -0.43040502]), (16, 1, False): array([-0.80493274, -0.73622705]), (7, 4, False): array([-0.27868852, -0.36739974]), (11, 4, False): array([-0.25977011, -0.07357449]), (18, 2, False): array([ 0.13535589, -0.68955142]), (13, 2, False): array([-0.28641975, -0.5136153 ]), (21, 2, False): array([ 0.88240456, -1. ]), (11, 10, False): array([-0.56797235, -0.17262079]), (6, 10, False): array([-0.57973734, -0.53222997]), (12, 4, False): array([-0.15914489, -0.46503497]), (5, 8, False): array([-0.51282051, -0.38964578]), (11, 7, False): array([-0.46910755, -0.04553519]), (17, 9, False): array([-0.42793296, -0.66802168]), (18, 9, False): array([-0.20153341, -0.67536705]), (21, 4, True): array([ 0.89699074, -0.02597403]), (19, 10, True): array([ 0.00922045, -0.31282051]), (9, 3, False): array([-0.25443787, -0.19393939]), (9, 6, False): array([-0.125 , -0.12334437]), (21, 6, True): array([ 0.90399556, -0.01354402]), (13, 6, True): array([-0.2038835, -0.0704607]), (13, 4, True): array([-0.12380952, -0.19726027]), (10, 4, False): array([-0.23785166, -0.10334996]), (12, 7, False): array([-0.49417249, -0.44953864]), (16, 7, False): array([-0.44258873, -0.59820929]), (4, 3, False): array([-0.30232558, -0.25842697]), (15, 3, True): array([-0.19230769, -0.21040724]), (10, 1, False): array([-0.73796791, -0.40983607]), (17, 10, True): array([-0.43021033, -0.40089419]), (21, 5, True): array([ 0.89716312, -0.07875895]), (21, 5, False): array([ 0.89683631, -1. ]), (11, 5, False): array([-0.22969838, -0.05436081]), (10, 6, False): array([-0.1747851 , -0.07989348]), (6, 2, False): array([-0.29133858, -0.44547135]), (16, 2, False): array([-0.28555678, -0.6132835 ]), (5, 2, False): array([-0.35416667, -0.37837838]), (17, 4, True): array([-0.11486486, -0.21399177]), (20, 10, True): array([ 0.44159876, -0.27145359]), (21, 1, False): array([ 0.65814394, -1. 
]), (7, 2, False): array([-0.17 , -0.44041451]), (8, 5, False): array([-0.11790393, -0.3808554 ]), (19, 2, True): array([ 0.39527027, -0.25503356]), (13, 10, True): array([-0.63835616, -0.34287617]), (16, 10, True): array([-0.57356077, -0.35966298]), (19, 5, False): array([ 0.45803899, -0.77586207]), (19, 9, False): array([ 0.27952209, -0.76234214]), (14, 1, False): array([-0.72108844, -0.69859402]), (12, 3, True): array([-0.44444444, -0.1030303 ]), (13, 9, True): array([-0.5257732 , -0.17380353]), (16, 9, True): array([-0.50442478, -0.21325052]), (18, 9, True): array([-0.2605042 , -0.32675045]), (11, 9, False): array([-0.52112676, -0.08071217]), (20, 1, True): array([ 0.11564626, -0.21387283]), (19, 3, False): array([ 0.39844872, -0.8034188 ]), (17, 7, False): array([-0.11283186, -0.62920723]), (15, 10, True): array([-0.52297593, -0.41401972]), (18, 10, True): array([-0.1780303 , -0.41648007]), (7, 3, False): array([-0.15083799, -0.41888298]), (17, 2, True): array([-0.14864865, -0.3125 ]), (6, 1, False): array([-0.74358974, -0.63088512]), (18, 7, True): array([ 0.3557047 , -0.19148936]), (15, 7, True): array([-0.38596491, -0.15742794]), (17, 1, True): array([-0.69918699, -0.46575342]), (19, 6, True): array([ 0.50162338, -0.07857143]), (18, 5, True): array([ 0.24358974, -0.28911565]), (21, 4, False): array([ 0.88903986, -1. ]), (4, 4, False): array([-0.33333333, -0.10059172]), (21, 9, True): array([ 0.9329092 , -0.08742004]), (7, 6, False): array([-0.17277487, -0.39415042]), (17, 9, True): array([-0.29545455, -0.26961771]), (9, 9, False): array([-0.49006623, -0.25655738]), (12, 1, True): array([-0.73684211, -0.45856354]), (16, 6, True): array([-0.08148148, -0.19333333]), (8, 8, False): array([-0.53051643, -0.36457261]), (9, 8, False): array([-0.57933579, -0.16852966]), (17, 7, True): array([-0.11764706, -0.26367188]), (4, 10, False): array([-0.64516129, -0.52949246]), (14, 6, True): array([-0.24752475, -0.03722084]), (6, 7, False): array([-0.52671756, -0.38790036]), (4, 7, False): array([-0.48148148, -0.37688442]), (6, 9, False): array([-0.63265306, -0.4973638 ]), (12, 7, True): array([-0.57894737, -0.23626374]), (16, 5, True): array([-0.20338983, -0.19795918]), (19, 5, True): array([ 0.46746575, -0.0141844 ]), (14, 1, True): array([-0.82795699, -0.57754011]), (16, 3, True): array([-0.23076923, -0.22933884]), (4, 8, False): array([-0.5 , -0.58282209]), (19, 3, True): array([ 0.40286624, -0.04477612]), (20, 6, True): array([ 0.67197452, -0.05063291]), (6, 8, False): array([-0.47887324, -0.47306397]), (8, 1, False): array([-0.86178862, -0.60784314]), (20, 4, True): array([ 0.68195719, -0.10344828]), (15, 6, True): array([-0.13207547, -0.11027569]), (20, 7, True): array([ 0.76589147, -0.05333333]), (19, 7, True): array([ 0.66227348, -0.06410256]), (13, 7, True): array([-0.47126437, -0.06395349]), (20, 9, True): array([ 0.78571429, -0.15428571]), (18, 4, True): array([ 0.25 , -0.2425829]), (15, 1, True): array([-0.73831776, -0.43023256]), (16, 2, True): array([-0.44954128, -0.24007937]), (8, 3, False): array([-0.36150235, -0.40576725]), (17, 3, True): array([-0.16541353, -0.25726141]), (8, 6, False): array([-0.18699187, -0.34024896]), (14, 7, True): array([-0.64 , -0.14150943]), (6, 5, False): array([-0.23076923, -0.29588015]), (13, 8, True): array([-0.46666667, -0.21899736]), (17, 6, True): array([ 0.096 , -0.23651452]), (14, 5, True): array([ 0.06976744, -0.15815085]), (7, 7, False): array([-0.46464646, -0.42034806]), (18, 2, True): array([ 0.13605442, -0.29615385]), (18, 3, True): array([ 
0.14285714, -0.27195946]), (20, 5, True): array([ 0.66839378, -0.01948052]), (6, 6, False): array([-0.2238806 , -0.30852995]), (13, 2, True): array([-0.35416667, -0.23545706]), (14, 9, True): array([-0.61403509, -0.21465969]), (5, 6, False): array([-0.11111111, -0.3601108 ]), (13, 1, True): array([-0.81052632, -0.36312849]), (16, 4, True): array([-0.31034483, -0.2043956 ]), (15, 2, True): array([-0.30508475, -0.16876574]), (14, 3, True): array([-0.31182796, -0.23896104]), (13, 5, True): array([-0.31111111, -0.03816794]), (5, 3, False): array([-0.33962264, -0.36986301]), (13, 3, True): array([-0.2952381 , -0.20689655]), (4, 6, False): array([-0.07692308, -0.3699422 ]), (5, 1, False): array([-0.69072165, -0.59556787]), (12, 5, True): array([-0.2173913 , -0.02197802]), (4, 1, False): array([-0.83333333, -0.56424581]), (6, 4, False): array([-0.33333333, -0.37372014]), (4, 2, False): array([-0.37931034, -0.40298507]), (4, 9, False): array([-0.42105263, -0.51020408]), (5, 7, False): array([-0.52688172, -0.40860215]), (12, 4, True): array([-0.20930233, -0.02285714]), (12, 8, True): array([-0.6097561 , -0.23353293]), (12, 6, True): array([-0.06666667, -0.07185629]), (12, 2, True): array([-0.39534884, -0.15846995]), (12, 9, True): array([-0.59183673, -0.04651163])})
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
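A note on the exploration schedule in this copy: the first helper in the cell below defines an exponential decay, $\epsilon_k = \epsilon_{\min} + (\epsilon_{\text{start}} - \epsilon_{\min})\, e^{-\lambda k}$ with $\lambda$ playing the role of `epsilon_decay`, while the `mc_control` function further down uses the simpler multiplicative schedule $\epsilon \leftarrow \max(\epsilon \cdot d,\ \epsilon_{\min})$ with $d =$ `eps_decay`.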
###Code
import random
epsilon = 1.0
start = 1.0
end = 0.05
epsilon_decay = .008
def Greedyepsilon(iteration):
global epsilon, start, end, epsilon_decay
epsilon = end + (start - end)*np.exp(-iteration*epsilon_decay)
return epsilon
def generate_episode(iteration, env, Q):
epsilon = Greedyepsilon(iteration)
episode = []
state = env.reset()
while True:
if random.random() <= epsilon:
#explore
action = env.action_space.sample()
else:
#exploit
#action = np.argmax(Q[state][:])
action = np.random.choice(np.arange(2), p = get_probs(Q[state], epsilon,2))
        next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def generate_episode(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q_table(Q, episode, alpha, gamma):
states, actions, rewards = zip(*episode)
discount = np.array([gamma**i for i in range(len(rewards)+1)])
#print("episode", episode)
for i,state in enumerate(states):
#print(Q[state])
Q[state][actions[i]] = Q[state][actions[i]] + alpha*(sum(rewards[i:]*discount[:-(i+1)])-Q[state][actions[i]])
#print(Q[state])
#print()
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0,eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
epsilon = max(epsilon*eps_decay, eps_min)
episode = generate_episode(env, Q, epsilon, nA)
Q = update_Q_table(Q, episode, alpha, gamma)
## TODO: complete the function
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 50000, 0.03)
np.arange(3)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v1')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(16, 9, False)
(20, 9, False)
End game! Reward: 1.0
You won :)
(9, 10, False)
(16, 10, False)
End game! Reward: 1.0
You won :)
(12, 7, True)
(19, 7, True)
(12, 7, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(10):
print(generate_episode_from_limit_stochastic(env))
###Output
[((21, 4, True), 0, 1.0)]
[((12, 10, False), 1, -1.0)]
[((14, 10, False), 1, -1.0)]
[((6, 3, False), 1, 0.0), ((16, 3, False), 0, 1.0)]
[((4, 1, False), 0, -1.0)]
[((18, 1, False), 1, -1.0)]
[((20, 10, False), 0, 0.0)]
[((8, 10, False), 1, 0.0), ((18, 10, False), 1, -1.0)]
[((12, 2, False), 1, 0.0), ((13, 2, False), 0, -1.0)]
[((12, 10, False), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
# This method will return the reward of the first timestep, plus the discounted rewards of any future timesteps.
def discounted_rewards_in_episode(episode, gamma=1.0):
# Accumulate rewards at this state/action pair by adding the reward in this frame,
# plus the discounted sum of rewards for future frames.
    rewards_from_state = 0.0
    for t, state_action_reward in enumerate(episode):
        t_reward = state_action_reward[2]
        # Discount by gamma**t so a reward t steps in the future is weighted by gamma^t.
        rewards_from_state += (gamma ** t) * t_reward
        #print("rewards_from_state: ", rewards_from_state)
return rewards_from_state
# Test the code above
def test_discounted_rewards_in_episode():
episode = [((13, 4, False), 1, 0.0), ((16, 4, False), 1, 0.0), ((17, 4, False), 1, 0.0), ((21, 4, False), 0, 1.0)]
for i in range(4):
sliced_episode = episode[i:]
discounted_rewards = discounted_rewards_in_episode(sliced_episode)
print("sliced_episode: ", sliced_episode)
print("discounted_rewards: ", discounted_rewards)
test_discounted_rewards_in_episode()
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# Generate an episode
episode = generate_episode(env)
#print("episode: ", episode)
# Iterate over each timestep
for t, stateActionReward in enumerate(episode):
# Each timestep contains state, action, reward
state = stateActionReward[0]
action = stateActionReward[1]
reward = stateActionReward[2]
#print("state: ", state)
#print("action: ", action)
#print("reward: ", reward)
# Accumulate rewards at this state/action pair by adding the reward in this frame,
# plus the discounted sum of rewards for future frames.
rewards_from_state = discounted_rewards_in_episode(episode[t:], gamma)
# Add to total rewards at state/action
returns_sum[state][action] += rewards_from_state
# Increment total visits at state/action
N[state][action] += 1.0
# Assign Q value (average reward for state/action)
Q[state][action] = returns_sum[state][action] / N[state][action]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
#Q = mc_prediction_q(env, 5, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
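This copy structures its $\epsilon$-greedy choice slightly differently from the earlier ones: `get_probabilities_epsilon_greedy` below returns a three-way split $[\,1-\epsilon,\ \epsilon/2,\ \epsilon/2\,]$ over (current best action, STICK, HIT), so the greedy action ends up being chosen with total probability $1 - \epsilon + \epsilon/2$ and the other action with probability $\epsilon/2$, which matches the usual $\epsilon$-greedy probabilities for a two-action environment.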
###Code
def decay_epsilon(eps_current, eps_decay, eps_min):
eps_decayed = eps_current * eps_decay
return eps_decayed if eps_decayed > eps_min else eps_min
def test_decay_epsilon():
for eps_decay in [1, 0.5, 0.3, 0.1, 0.01]:
eps_decayed = decay_epsilon(1.0, eps_decay, 0.1)
print("eps_decayed(1.0, {eps_decay}, 0.1): {eps_decayed}".format(**locals()))
test_decay_epsilon()
def get_action_with_highest_reward(actions):
# Initialize best_action and reward to the first index.
best_action = 0
reward = actions[0]
for i in range(len(actions)):
if actions[i] > reward:
reward = actions[i]
best_action = i
return best_action
def test_get_action_with_highest_reward():
stateActionDict = { 3: [-0.9, -1.0], 6: [-0.4, -0.2], 12: [1, 10], 16: [3, 8], 20: [10, 1] }
for i, key in enumerate(stateActionDict):
best_action = get_action_with_highest_reward(stateActionDict[key])
print("key: ", key, ", best action: ", best_action)
test_get_action_with_highest_reward()
def get_probabilities_epsilon_greedy(epsilon):
# 1 - epsilon = action with highest rewards for the state
# epsilon = random action
# probability for random action will be 1 / numPossibleActions (which is 2 for blackjack -- stay and hold)
# Return an array of 3 elements, in terms of probability of choosing the following actions:
# 1. best current action
# 2. stick (0)
# 3. hit (1)
return [1.0 - epsilon, epsilon / 2, epsilon / 2]
def test_get_probabilities_epsilon_greedy():
for epsilon in [1.0, 0.7, 0.3, 0.1]:
print("epsilon: ", epsilon, ", probs: ", get_probabilities_epsilon_greedy(epsilon))
test_get_probabilities_epsilon_greedy()
def choose_action_epsilon_greedy(Q, state, epsilon):
probs = get_probabilities_epsilon_greedy(epsilon)
# Get current best action for the state
best_action = get_action_with_highest_reward(Q[state])
# Choose an action, based on epsilon probability
action = np.random.choice(np.array([best_action, 0, 1]), p=probs)
return action
def test_choose_action_epsilon_greedy(epsilon):
# Run a particular state a few times, with differing epsilons, to be satisfied that it's choosing
# the best action for a particular state, with respect to epsilon.
state = 12
state_action_dict = { state: [2, 10] }
action = choose_action_epsilon_greedy(state_action_dict, state, epsilon)
print("Action: ", action)
# Test 10 times with epsilon 1.0. Should be very random between 0 and 1. Around 50/50, but random.
print("Testing choose_action_epsilon_greedy with epsilon 1.0")
for i in range(10):
test_choose_action_epsilon_greedy(1.0)
# Test 10 times with epsilon 0.5. Should be HIT (1) approximately 75% of the time.
print("Testing choose_action_epsilon_greedy with epsilon 0.5")
for i in range(10):
test_choose_action_epsilon_greedy(0.5)
# Test 10 times with epsilon 0.1. Should be HIT (1) approximately 95% of the time (0.9 + 0.05).
print("Testing choose_action_epsilon_greedy with epsilon 0.1")
for i in range(10):
test_choose_action_epsilon_greedy(0.1)
# Test 10 times with epsilon 0.0. Should be HIT (1) 100% of the time.
print("Testing choose_action_epsilon_greedy with epsilon 0.0")
for i in range(10):
test_choose_action_epsilon_greedy(0.0)
def generate_episode_epsilon_greedy(env, epsilon, Q):
episode = []
state = env.reset()
while True:
action = choose_action_epsilon_greedy(Q, state, epsilon)
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_policy_from_Q_table(Q):
policy = {}
for i, state in enumerate(Q):
best_action = get_action_with_highest_reward(Q[state])
policy[state] = best_action
return policy
def test_get_policy_from_Q_table():
mydict = { 12: [2, 8], 15: [4, 6], 19: [9, 1], 21: [10, 0] }
policy = get_policy_from_Q_table(mydict)
print("policy: ", policy)
test_get_policy_from_Q_table();
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# For debugging purposes
N = defaultdict(lambda: np.zeros(env.action_space.n))
# Initialize epsilon
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# Decay epsilon
if i_episode > 1:
epsilon = decay_epsilon(epsilon, eps_decay, eps_min)
#print("epsilon: ", epsilon)
# Generate an episode
episode = generate_episode_epsilon_greedy(env, epsilon, Q)
#print("episode: ", episode)
# Iterate over each timestep
for t, stateActionReward in enumerate(episode):
#print("t: ", t)
# Each timestep contains state, action, reward
state = stateActionReward[0]
action = stateActionReward[1]
reward = stateActionReward[2]
#print("state: ", state)
#print("action: ", action)
#print("reward: ", reward)
# What is our current stored Q value for this particular state and action?
q_stateaction = Q[state][action]
#print("q_stateaction: ", q_stateaction)
#if N[state][action] > 0:
# print("Episode contains a state to update: ", episode)
# print("Updating state: ", state, ", action: ", action, ", existing_q_value: ", q_stateaction, ", reward: ", reward)
# Get discounted total rewards in episode from this timestep forward.
rewards_from_state = discounted_rewards_in_episode(episode[t:], gamma)
#print("rewards_from_state: ", rewards_from_state)
# Update our Q Table for this particular state and action.
new_q_value = q_stateaction + alpha * (rewards_from_state - q_stateaction)
#print("new_q_value: ", new_q_value)
#if N[state][action] > 0:
# print("Updating state: ", state, ", action: ", action, ", rewards_from_state: ", rewards_from_state, " with new_q_value: ", new_q_value)
# Update Q value
Q[state][action] = new_q_value
#print("Q[", state, "][", action, "]: ", Q[state][action])
#if N[state][action] > 0:
# print("New Q Value for state: ", state, ", action: ", action, ": ", Q[state][action])
# print("\n")
# Update N for state action, for debugging.
N[state][action] += 1
policy = get_policy_from_Q_table(Q)
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.01)
# Debugging implementation by printing out our dictionaries.
print("Final Q Table:")
for i, state in enumerate(Q):
print("State: ", state, "; STICK: ", Q[state][0], ", HIT: ", Q[state][1])
print("\n\nFinal Policy:")
for i, state in enumerate(policy):
print("State: ", state, ": Action: ", "STICK" if policy[state] == 0 else "HIT")
###Output
Episode 500000/500000.Final Q Table:
State: (14, 10, False) ; STICK: -0.6690393203477828 , HIT: -0.47116921589689553
State: (15, 10, False) ; STICK: -0.7080916316244705 , HIT: -0.5542437098070512
State: (12, 10, False) ; STICK: -0.497238134475551 , HIT: -0.47342582430350133
State: (12, 1, False) ; STICK: -0.7595688561558781 , HIT: -0.4310906961720102
State: (10, 6, False) ; STICK: -0.12127838198307941 , HIT: 0.22647016801870837
State: (16, 10, True) ; STICK: -0.5864769811644801 , HIT: -0.278472556872247
State: (12, 10, True) ; STICK: -0.31458808989477816 , HIT: -0.24025766007450391
State: (21, 10, True) ; STICK: 0.9523933698105074 , HIT: -0.030620537116798203
State: (19, 2, False) ; STICK: 0.3689032683144463 , HIT: -0.7084779081842559
State: (11, 3, False) ; STICK: -0.26388940560216667 , HIT: 0.28323113372322634
State: (13, 3, False) ; STICK: -0.40482872834101585 , HIT: -0.3923751917428821
State: (12, 9, False) ; STICK: -0.5557608953879702 , HIT: -0.2659247755573426
State: (13, 9, False) ; STICK: -0.6279770605216451 , HIT: -0.43724357539730574
State: (18, 7, False) ; STICK: 0.3958208805157461 , HIT: -0.6202955652294595
State: (21, 3, False) ; STICK: 0.9005636824651306 , HIT: -0.706577278474784
State: (20, 10, False) ; STICK: 0.4698737651821744 , HIT: -0.8963384668145336
State: (21, 10, False) ; STICK: 0.8442490372848392 , HIT: -0.9993617917037569
State: (15, 7, True) ; STICK: -0.23132911713736284 , HIT: -0.04377496482606765
State: (17, 7, False) ; STICK: -0.07541595305510743 , HIT: -0.39470149613487077
State: (13, 10, False) ; STICK: -0.5979062840402417 , HIT: -0.334110834410592
State: (18, 4, False) ; STICK: 0.09365520491959488 , HIT: -0.6041175315195103
State: (15, 5, False) ; STICK: -0.30964237774609316 , HIT: -0.3189240702140442
State: (8, 8, False) ; STICK: -0.39987925895199034 , HIT: -0.019578183428826782
State: (10, 7, False) ; STICK: -0.44906367138314385 , HIT: 0.2789522942513284
State: (13, 7, False) ; STICK: -0.5281460638911504 , HIT: -0.25598564393085604
State: (16, 7, False) ; STICK: -0.5529047172864484 , HIT: -0.48598111255426807
State: (15, 1, False) ; STICK: -0.7601517864626048 , HIT: -0.6164902938223592
State: (20, 9, False) ; STICK: 0.7229002248042611 , HIT: -0.8398573170565632
State: (20, 7, False) ; STICK: 0.7893999926601802 , HIT: -0.8600416260081432
State: (14, 3, False) ; STICK: -0.36493702513501924 , HIT: -0.4763777562092528
State: (12, 7, False) ; STICK: -0.5090143315849521 , HIT: -0.12571980732820537
State: (14, 10, True) ; STICK: -0.5530701390771354 , HIT: -0.3165787172624941
State: (15, 2, False) ; STICK: -0.22906361114222923 , HIT: -0.3932187044524423
State: (17, 10, False) ; STICK: -0.5653014195448083 , HIT: -0.6486888558759366
State: (6, 10, False) ; STICK: -0.5673018006524951 , HIT: -0.507155528327701
State: (20, 1, False) ; STICK: 0.22749035279684013 , HIT: -0.9238059332676563
State: (19, 8, False) ; STICK: 0.5779518071253089 , HIT: -0.6502459644595587
State: (13, 4, True) ; STICK: -0.08558792044784154 , HIT: 0.08480085640116891
State: (20, 4, False) ; STICK: 0.6416435050036272 , HIT: -0.907361350695985
State: (12, 8, False) ; STICK: -0.5308772739320508 , HIT: -0.3269253325687798
State: (21, 8, False) ; STICK: 0.9195816577247292 , HIT: -0.8170434816909389
State: (15, 3, False) ; STICK: -0.2466848718230949 , HIT: -0.45952493623081864
State: (10, 9, False) ; STICK: -0.41757318851692393 , HIT: -0.06855378191267955
State: (16, 6, True) ; STICK: -0.05520160043300167 , HIT: 0.10614427259456585
State: (16, 6, False) ; STICK: -0.13196911398259137 , HIT: -0.5788377469487502
State: (19, 7, False) ; STICK: 0.6830440501313931 , HIT: -0.8609898439148804
State: (13, 2, False) ; STICK: -0.4391821592353704 , HIT: -0.2752658107063544
State: (17, 5, False) ; STICK: -0.17206868912262557 , HIT: -0.666280328982618
State: (19, 5, False) ; STICK: 0.38542390639123497 , HIT: -0.6452742706977294
State: (14, 7, False) ; STICK: -0.540891226001868 , HIT: -0.4046651133722138
State: (21, 3, True) ; STICK: 0.9825551385399124 , HIT: 0.24663706452411868
State: (12, 6, False) ; STICK: -0.15132688791230364 , HIT: -0.2984340031196244
State: (19, 6, False) ; STICK: 0.45850688346232216 , HIT: -0.5853539968309202
State: (11, 10, False) ; STICK: -0.6673679837031428 , HIT: -0.0601852888930892
State: (18, 5, False) ; STICK: 0.28818370872977817 , HIT: -0.6609524948083151
State: (9, 6, False) ; STICK: -0.16011744051551008 , HIT: 0.08824786429613414
State: (16, 5, False) ; STICK: -0.22174158341088698 , HIT: -0.4627437986515894
State: (21, 5, False) ; STICK: 0.8547472951733434 , HIT: -0.7181393044595363
State: (21, 6, True) ; STICK: 0.9936926624026919 , HIT: 0.1391379178195697
State: (7, 4, False) ; STICK: -0.29431084777707384 , HIT: -0.05175623704267623
State: (18, 10, False) ; STICK: -0.3019223381500035 , HIT: -0.7562347936841967
State: (8, 6, False) ; STICK: -0.06748925739202402 , HIT: 0.2016183737286589
State: (18, 6, False) ; STICK: 0.1892017379840724 , HIT: -0.6626414994012559
State: (16, 10, False) ; STICK: -0.592614380351366 , HIT: -0.7037226321167886
State: (21, 2, True) ; STICK: 0.9700092441758177 , HIT: 0.12832247713493528
State: (17, 2, False) ; STICK: -0.15399702867080256 , HIT: -0.6031106199344267
State: (20, 2, False) ; STICK: 0.6988424695939977 , HIT: -0.8970635999030075
State: (11, 2, False) ; STICK: -0.3812305794355718 , HIT: 0.27870049174453615
State: (19, 10, False) ; STICK: -0.01958136468972381 , HIT: -0.711298472275617
State: (8, 5, False) ; STICK: -0.1224988430393956 , HIT: 0.09150857705381438
State: (13, 5, False) ; STICK: -0.17068927670455433 , HIT: -0.3454830259017007
State: (15, 4, False) ; STICK: -0.2885700522682426 , HIT: -0.4488709447315645
State: (15, 9, False) ; STICK: -0.6007590355260379 , HIT: -0.6175246279221994
State: (21, 9, False) ; STICK: 0.9469338664027339 , HIT: -0.7915075382652385
State: (17, 2, True) ; STICK: -0.09907311927415596 , HIT: -0.054356420835475185
State: (9, 3, False) ; STICK: -0.12969723778140219 , HIT: 0.13647884415718298
State: (19, 4, False) ; STICK: 0.3981129328521563 , HIT: -0.8005076836969509
State: (18, 9, False) ; STICK: -0.23491652099978863 , HIT: -0.6343138276267718
State: (13, 6, False) ; STICK: -0.15417523600944105 , HIT: -0.23688706735041304
State: (20, 6, False) ; STICK: 0.689578438022588 , HIT: -0.8279043443410117
State: (10, 1, False) ; STICK: -0.6039522577557822 , HIT: -0.32873977908034746
State: (19, 3, True) ; STICK: 0.4235517969268365 , HIT: -0.025016239758512398
State: (14, 1, False) ; STICK: -0.7548164949656834 , HIT: -0.5832409869834078
State: (20, 3, False) ; STICK: 0.5894042190003113 , HIT: -0.880219886370995
State: (16, 2, False) ; STICK: -0.3102073448381471 , HIT: -0.5850524962559758
State: (21, 1, True) ; STICK: 0.7266139357806587 , HIT: -0.16310972741195964
State: (19, 3, False) ; STICK: 0.4095241604237037 , HIT: -0.7226738701986383
State: (15, 8, False) ; STICK: -0.592331312628059 , HIT: -0.44111162582087066
State: (11, 8, False) ; STICK: -0.4661530504788135 , HIT: 0.20879229071109262
State: (21, 4, True) ; STICK: 0.9567663172032902 , HIT: 0.05309385896356955
State: (13, 4, False) ; STICK: -0.19517727849773786 , HIT: -0.41974656962261947
State: (9, 10, False) ; STICK: -0.5962403054153894 , HIT: -0.23267329533987371
State: (4, 4, False) ; STICK: -0.10827358000990944 , HIT: -0.0824427188602481
State: (14, 4, False) ; STICK: -0.22357207066885304 , HIT: -0.41223377878680983
State: (12, 1, True) ; STICK: -0.26333252982792354 , HIT: -0.22939446662449425
State: (18, 6, True) ; STICK: 0.19977566317273654 , HIT: 0.06381082028678171
State: (20, 8, False) ; STICK: 0.838400087438485 , HIT: -0.8821743576206542
State: (6, 6, False) ; STICK: -0.16551462277415618 , HIT: -0.012489281802291535
State: (14, 6, False) ; STICK: -0.1397924067895497 , HIT: -0.3873819581324068
State: (17, 3, False) ; STICK: -0.1453681231328924 , HIT: -0.5239347557140817
State: (12, 3, False) ; STICK: -0.1970288767004811 , HIT: -0.3713637200937061
State: (11, 1, False) ; STICK: -0.699934709639214 , HIT: -0.11180799706021288
State: (17, 1, False) ; STICK: -0.744526072819408 , HIT: -0.7023756673451402
State: (12, 2, False) ; STICK: -0.4365656682185628 , HIT: -0.34640998117991634
State: (18, 4, True) ; STICK: 0.10953916430119778 , HIT: -0.06783887419900787
State: (7, 3, False) ; STICK: -0.2969930723303906 , HIT: -0.07933666369599378
State: (7, 10, False) ; STICK: -0.6385547209907374 , HIT: -0.49033654379757236
State: (17, 8, False) ; STICK: -0.4159959554562636 , HIT: -0.5086182900297909
State: (17, 1, True) ; STICK: -0.5151788255127036 , HIT: -0.4183968624448324
State: (10, 10, False) ; STICK: -0.5607689247868856 , HIT: -0.04665885139898171
State: (15, 7, False) ; STICK: -0.6457457628128356 , HIT: -0.3965118269948406
State: (17, 10, True) ; STICK: -0.4677858759946145 , HIT: -0.212457519830529
State: (14, 5, False) ; STICK: -0.13228957148913162 , HIT: -0.37385318369293424
State: (8, 9, False) ; STICK: -0.43826266698940863 , HIT: -0.3515408018908904
State: (16, 8, False) ; STICK: -0.44428057374101976 , HIT: -0.6158897685615161
State: (19, 9, False) ; STICK: 0.26807522981884735 , HIT: -0.6630295052337796
State: (11, 6, False) ; STICK: -0.14285512693377425 , HIT: 0.26634869017246476
State: (21, 6, False) ; STICK: 0.8747902197747761 , HIT: -0.7915075382652385
State: (21, 7, False) ; STICK: 0.9613335312350316 , HIT: -0.8488470650468527
State: (12, 5, False) ; STICK: -0.17298678170114903 , HIT: -0.3195689719108689
State: (18, 8, False) ; STICK: 0.11900148186293724 , HIT: -0.5714794017135427
State: (11, 5, False) ; STICK: -0.19444863398393072 , HIT: 0.3078913300755808
State: (8, 2, False) ; STICK: -0.3148588969099246 , HIT: -0.08241765912029474
State: (20, 10, True) ; STICK: 0.4219156480097948 , HIT: -0.0883076720389742
State: (16, 3, False) ; STICK: -0.2568827378313109 , HIT: -0.5108186005634362
State: (9, 8, False) ; STICK: -0.4007051046236143 , HIT: 0.08920634226046809
State: (13, 8, False) ; STICK: -0.5915578040472597 , HIT: -0.23807778480725003
State: (20, 2, True) ; STICK: 0.7382453955475571 , HIT: 0.05678795802559956
State: (16, 4, True) ; STICK: -0.03405161252021893 , HIT: 0.046372529334451185
State: (19, 4, True) ; STICK: 0.34279983507311473 , HIT: 0.13046617096708413
State: (21, 4, False) ; STICK: 0.873588714838609 , HIT: -0.7319532830831258
State: (18, 3, False) ; STICK: 0.12426006925629235 , HIT: -0.6132805651522857
State: (18, 2, False) ; STICK: 0.17561266779122658 , HIT: -0.6759832630913312
State: (21, 5, True) ; STICK: 0.9747815704234247 , HIT: 0.15098371479742415
State: (6, 4, False) ; STICK: -0.25414286457627017 , HIT: -0.038513048021329856
State: (16, 4, False) ; STICK: -0.1591140174684055 , HIT: -0.4612049506923131
State: (17, 4, False) ; STICK: -0.02703120260910776 , HIT: -0.5242486913050137
State: (13, 1, False) ; STICK: -0.7475978464662985 , HIT: -0.5279366754892977
State: (6, 8, False) ; STICK: -0.36183881438468307 , HIT: -0.2573442872379594
State: (15, 6, True) ; STICK: -0.16835068849344292 , HIT: -0.15735359066413676
State: (20, 5, False) ; STICK: 0.7502955040023849 , HIT: -0.8750541203084117
State: (8, 10, False) ; STICK: -0.5496703001688253 , HIT: -0.37403262545018406
State: (14, 8, False) ; STICK: -0.6026741772906162 , HIT: -0.3411536577637136
State: (18, 8, True) ; STICK: 0.12006223975515999 , HIT: -0.06456200907688217
State: (19, 6, True) ; STICK: 0.5407887719770818 , HIT: 0.09364047619782755
State: (19, 9, True) ; STICK: 0.1834191116824007 , HIT: -0.014885758150405956
State: (12, 4, False) ; STICK: -0.10343805129398978 , HIT: -0.3881363874430486
State: (14, 2, False) ; STICK: -0.1531151967717206 , HIT: -0.4778161224834594
State: (21, 7, True) ; STICK: 0.990984165010642 , HIT: 0.08304620325057388
State: (8, 4, False) ; STICK: -0.19362583458816968 , HIT: -0.08235875715715643
State: (9, 1, False) ; STICK: -0.6190681947271728 , HIT: -0.3692307085661359
State: (19, 10, True) ; STICK: 0.04425318930578756 , HIT: -0.16281642005300265
State: (20, 6, True) ; STICK: 0.7629128585331625 , HIT: 0.05221704431043745
State: (4, 5, False) ; STICK: -0.08170129882102262 , HIT: -0.12880480675127773
State: (14, 1, True) ; STICK: -0.41809509461585176 , HIT: -0.3993667709331673
State: (8, 1, False) ; STICK: -0.6634299784899104 , HIT: -0.4163471080922329
State: (6, 7, False) ; STICK: -0.29056396345402924 , HIT: -0.0956531756189282
State: (7, 1, False) ; STICK: -0.6390149965905935 , HIT: -0.5720724274099082
State: (15, 10, True) ; STICK: -0.5833915614879536 , HIT: -0.2623826055741519
State: (15, 6, False) ; STICK: -0.2550037968668152 , HIT: -0.4001097441837955
State: (18, 9, True) ; STICK: -0.2401864369057395 , HIT: -0.08534563941261741
State: (16, 9, False) ; STICK: -0.6223372721957222 , HIT: -0.407279962589451
State: (11, 4, False) ; STICK: -0.21913454012662084 , HIT: 0.1801427590114613
State: (19, 2, True) ; STICK: 0.4067991412022823 , HIT: 0.11229960235436832
State: (9, 9, False) ; STICK: -0.3906872555879747 , HIT: -0.05374657438466551
State: (6, 5, False) ; STICK: -0.2437702079889003 , HIT: -0.16500600974928425
State: (13, 1, True) ; STICK: -0.42230763816489436 , HIT: -0.24088969790006595
State: (18, 1, True) ; STICK: -0.3395380425973151 , HIT: -0.44204233757759914
State: (10, 4, False) ; STICK: -0.09480590131262742 , HIT: 0.21285363332758914
State: (11, 9, False) ; STICK: -0.4520981536786026 , HIT: 0.0639337143392496
State: (18, 1, False) ; STICK: -0.3671965356471635 , HIT: -0.7210175096586369
State: (14, 4, True) ; STICK: -0.1400006160579019 , HIT: 0.07207349995454934
State: (9, 5, False) ; STICK: -0.1466044165794111 , HIT: 0.0747311655385809
State: (14, 7, True) ; STICK: -0.3108421383298411 , HIT: 0.04668253157337654
State: (16, 1, False) ; STICK: -0.8005152804968451 , HIT: -0.6408218891776182
State: (19, 1, False) ; STICK: -0.15654987135167422 , HIT: -0.7759930357044312
State: (17, 9, False) ; STICK: -0.4296697349821759 , HIT: -0.5179132912224211
State: (17, 5, True) ; STICK: -0.035389200793099644 , HIT: 0.12946262116268903
State: (21, 1, False) ; STICK: 0.5796252470927128 , HIT: -0.8847696612862868
State: (18, 10, True) ; STICK: -0.26943794621623224 , HIT: -0.35288990859757813
State: (16, 5, True) ; STICK: -0.09070582733491654 , HIT: 0.16854345154818298
State: (14, 6, True) ; STICK: -0.14309940995703685 , HIT: 0.013689575767249343
State: (7, 2, False) ; STICK: -0.26721773302205026 , HIT: -0.0156562565850458
State: (4, 6, False) ; STICK: -0.08610776752407034 , HIT: -0.015581473376379272
State: (18, 2, True) ; STICK: 0.2687718765249543 , HIT: 0.010007858634033043
State: (14, 9, False) ; STICK: -0.5597673502840048 , HIT: -0.4535051246172052
State: (12, 8, True) ; STICK: -0.09229642622391797 , HIT: 0.09326375809593092
State: (7, 9, False) ; STICK: -0.38158264229103445 , HIT: -0.31840214531939265
State: (15, 3, True) ; STICK: -0.20467186625032127 , HIT: -0.035477441118316494
State: (13, 10, True) ; STICK: -0.4339991724241778 , HIT: -0.10082533882412652
State: (6, 1, False) ; STICK: -0.6241785515985004 , HIT: -0.5091258632937843
State: (17, 7, True) ; STICK: -0.0761706558310156 , HIT: -0.12334367379478359
State: (11, 7, False) ; STICK: -0.40142515493203906 , HIT: 0.23405066664584395
State: (19, 1, True) ; STICK: -0.20984420651659133 , HIT: -0.21904924717125582
State: (18, 5, True) ; STICK: 0.2543466255499189 , HIT: 0.0054234810154451096
State: (7, 8, False) ; STICK: -0.4008335301573042 , HIT: -0.26289667200579286
State: (21, 8, True) ; STICK: 0.9933563315301419 , HIT: 0.21766757992381708
State: (12, 7, True) ; STICK: -0.13205058104471007 , HIT: 0.07804587487019779
State: (7, 5, False) ; STICK: -0.2739808646105304 , HIT: -0.07123379921137213
State: (5, 3, False) ; STICK: -0.2118235180546067 , HIT: -0.07355623018416027
State: (5, 7, False) ; STICK: -0.2585599265409609 , HIT: -0.2015513555276269
State: (7, 6, False) ; STICK: -0.1955026970860097 , HIT: 0.02777505418779378
State: (9, 7, False) ; STICK: -0.35462054743807375 , HIT: 0.10138817278844914
State: (12, 5, True) ; STICK: -0.037922785380023305 , HIT: 0.10782038898887726
State: (19, 5, True) ; STICK: 0.4027940398403436 , HIT: 0.06460893467666526
State: (21, 9, True) ; STICK: 0.9920163727991049 , HIT: 0.04511739038607065
State: (17, 6, True) ; STICK: -0.012077762570194788 , HIT: -0.06128586833685273
State: (17, 6, False) ; STICK: 0.07282959016119195 , HIT: -0.6697032169484128
State: (21, 2, False) ; STICK: 0.8743292610005454 , HIT: -0.7319532830831258
State: (10, 5, False) ; STICK: -0.09074212907210204 , HIT: 0.17651921358396605
State: (9, 2, False) ; STICK: -0.32282018725963724 , HIT: 0.21887278789638018
State: (14, 3, True) ; STICK: -0.23189069077489566 , HIT: -0.05033669043526472
State: (10, 3, False) ; STICK: -0.17880776859346176 , HIT: 0.05876828673958932
State: (15, 9, True) ; STICK: -0.3214731744865007 , HIT: -0.08109734484151654
State: (5, 5, False) ; STICK: -0.21989010362727632 , HIT: -0.025594457929883546
State: (14, 5, True) ; STICK: -0.09820999662285457 , HIT: 0.17777987812481183
State: (5, 10, False) ; STICK: -0.5182842168189568 , HIT: -0.4547201384888658
State: (13, 9, True) ; STICK: -0.2036641232657089 , HIT: -0.10940375279063161
State: (4, 1, False) ; STICK: -0.4730577212602933 , HIT: -0.472514202985226
State: (19, 7, True) ; STICK: 0.715686040781541 , HIT: 0.05947053525057019
State: (13, 7, True) ; STICK: -0.15839881312597826 , HIT: 0.10695053821045945
State: (20, 1, True) ; STICK: 0.07664765434985922 , HIT: -0.1537361846125782
State: (8, 7, False) ; STICK: -0.34468443041652935 , HIT: 0.0098197965487684
State: (13, 8, True) ; STICK: -0.2040975364557939 , HIT: 0.043189864941313244
State: (20, 7, True) ; STICK: 0.8076095808255025 , HIT: -0.0733819717020328
State: (14, 2, True) ; STICK: -0.18386522776730774 , HIT: 0.11151885090107425
State: (12, 6, True) ; STICK: -0.08965772435621511 , HIT: 0.16005030119440403
State: (16, 1, True) ; STICK: -0.4750793939061913 , HIT: -0.3301013236903497
State: (10, 8, False) ; STICK: -0.48047741924125253 , HIT: 0.24557608765098055
State: (9, 4, False) ; STICK: -0.15459164116752938 , HIT: 0.16667843644808397
State: (18, 7, True) ; STICK: 0.5227931918336783 , HIT: 0.03044634120978918
State: (10, 2, False) ; STICK: -0.2955839590869432 , HIT: 0.1503634112283825
State: (5, 2, False) ; STICK: -0.2231494311372431 , HIT: -0.12132739789180652
State: (16, 2, True) ; STICK: -0.21083354622634157 , HIT: 0.00130296593404834
State: (13, 3, True) ; STICK: -0.10110588991282561 , HIT: 0.18303179420887736
State: (4, 8, False) ; STICK: -0.32179342029638824 , HIT: -0.28807797897585546
State: (15, 4, True) ; STICK: -0.1857583735212534 , HIT: 0.013408978154625611
State: (18, 3, True) ; STICK: 0.19363288570147177 , HIT: -0.06882363686803351
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(15, 1, False)
(18, 1, False)
End game! Reward: 1.0
You won :)
(16, 7, True)
(19, 7, True)
(19, 7, False)
End game! Reward: 1.0
You won :)
(18, 9, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
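Written out, this means

$$\pi(\texttt{STICK} \mid s) = \begin{cases} 0.8 & \text{if the player's sum exceeds } 18, \\ 0.2 & \text{otherwise,} \end{cases} \qquad \pi(\texttt{HIT} \mid s) = 1 - \pi(\texttt{STICK} \mid s).$$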
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((13, 10, False), 1, -1)]
[((13, 8, True), 1, 0), ((19, 8, True), 0, 1.0)]
[((11, 10, False), 1, 0), ((16, 10, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
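For reference, the every-visit estimate computed in the implementation below is just the sample mean of the observed returns,

$$Q(s,a) = \frac{1}{N(s,a)} \sum_{\text{visits of } (s,a)} G_t, \qquad G_t = \sum_{k=0}^{T-t-1} \gamma^{k} R_{t+k+1},$$

where $N(s,a)$ counts how many times the pair $(s,a)$ appears across all sampled episodes.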
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# generate an episode
episode = generate_episode(env)
# obtain the states, actions, and rewards
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
print(Q)
###Output
defaultdict(<function mc_prediction_q.<locals>.<lambda> at 0x00000200DDE534C8>, {(15, 8, False): array([-0.44730679, -0.56450241]), (20, 8, False): array([ 0.78725962, -0.85835258]), (18, 5, False): array([ 0.21584386, -0.66824381]), (21, 5, False): array([ 0.90630551, -1. ]), (12, 10, False): array([-0.56952491, -0.55876214]), (19, 10, False): array([-0.02492958, -0.78681196]), (7, 5, False): array([-0.15841584, -0.39657444]), (17, 5, False): array([-0.01026226, -0.65179063]), (20, 5, False): array([ 0.66351188, -0.89220564]), (10, 5, False): array([-0.12790698, -0.07449051]), (13, 5, False): array([-0.15111111, -0.47101033]), (18, 10, False): array([-0.26137281, -0.73531012]), (12, 2, False): array([-0.35336538, -0.47929172]), (20, 2, False): array([ 0.65767045, -0.88862745]), (19, 9, True): array([ 0.28471002, -0.16312057]), (19, 8, False): array([ 0.57891722, -0.75193798]), (18, 2, False): array([ 0.15039282, -0.67266685]), (16, 1, False): array([-0.74235808, -0.72148024]), (8, 5, False): array([-0.20833333, -0.35699374]), (15, 5, False): array([-0.1465798 , -0.58190149]), (15, 1, False): array([-0.77852349, -0.68268956]), (16, 7, False): array([-0.48977395, -0.57885763]), (6, 9, False): array([-0.47826087, -0.47931034]), (16, 9, False): array([-0.56170213, -0.61964039]), (16, 4, False): array([-0.2167382 , -0.62359712]), (4, 10, False): array([-0.57831325, -0.49027237]), (7, 10, False): array([-0.56521739, -0.53173298]), (9, 3, False): array([-0.30246914, -0.18943089]), (19, 3, False): array([ 0.4398444 , -0.74692049]), (12, 1, False): array([-0.73393461, -0.61994766]), (13, 10, False): array([-0.59015001, -0.58409635]), (21, 10, False): array([ 0.89159922, -1. ]), (13, 6, False): array([-0.18300654, -0.49130555]), (16, 10, True): array([-0.57667387, -0.38493506]), (13, 1, False): array([-0.80324074, -0.66094658]), (17, 1, False): array([-0.62639821, -0.74895164]), (16, 10, False): array([-0.55822198, -0.67048071]), (14, 10, False): array([-0.55081967, -0.61949861]), (20, 10, False): array([ 0.43305984, -0.89183395]), (9, 10, False): array([-0.6187399 , -0.35907577]), (13, 5, True): array([-0.14851485, -0.07317073]), (12, 3, False): array([-0.28222997, -0.45890212]), (8, 6, False): array([-0.20491803, -0.35902851]), (15, 6, False): array([-0.18526544, -0.55332217]), (17, 9, True): array([-0.36153846, -0.29811321]), (14, 9, False): array([-0.54809843, -0.56105702]), (21, 9, False): array([ 0.93218807, -1. 
]), (8, 9, False): array([-0.54471545, -0.4522293 ]), (18, 9, False): array([-0.19550562, -0.68890119]), (13, 9, False): array([-0.56114286, -0.52494395]), (17, 9, False): array([-0.43972445, -0.66785128]), (6, 10, False): array([-0.56981132, -0.52553763]), (17, 3, False): array([-0.13400901, -0.63636364]), (18, 3, False): array([ 0.19002123, -0.68595041]), (13, 3, False): array([-0.19862227, -0.50287687]), (16, 5, False): array([-0.2 , -0.62811791]), (14, 1, False): array([-0.77319588, -0.68935927]), (16, 8, False): array([-0.52155172, -0.60798898]), (16, 2, False): array([-0.2568306 , -0.62763916]), (15, 10, False): array([-0.56239692, -0.64728763]), (11, 10, False): array([-0.59702347, -0.18116474]), (6, 3, False): array([-0.28985507, -0.41166381]), (20, 2, True): array([ 0.59811617, -0.17808219]), (21, 10, True): array([ 0.89116038, -0.20681687]), (17, 8, True): array([-0.41025641, -0.16064257]), (14, 8, False): array([-0.49315068, -0.52255319]), (14, 6, True): array([-0.30120482, -0.16062176]), (14, 6, False): array([-0.1561086 , -0.54506923]), (15, 9, True): array([-0.58181818, -0.27692308]), (12, 9, False): array([-0.50877193, -0.50469208]), (14, 3, False): array([-0.25443787, -0.57263991]), (20, 3, False): array([ 0.65171451, -0.88314785]), (18, 7, False): array([ 0.41361257, -0.65881397]), (11, 1, False): array([-0.7535545 , -0.31204819]), (19, 9, False): array([ 0.30531589, -0.79614949]), (9, 9, False): array([-0.55956679, -0.24200164]), (15, 4, False): array([-0.13621262, -0.56433978]), (21, 2, True): array([0.85502959, 0. ]), (15, 2, False): array([-0.28899083, -0.57855717]), (8, 10, False): array([-0.56913828, -0.52190623]), (21, 3, True): array([ 0.88400901, -0.175 ]), (10, 10, False): array([-0.58896151, -0.25256849]), (19, 5, False): array([ 0.43415179, -0.76490066]), (20, 1, False): array([ 0.16683042, -0.90056589]), (13, 8, True): array([-0.51020408, -0.18130312]), (12, 8, False): array([-0.50925926, -0.4638323 ]), (18, 8, False): array([ 0.10774411, -0.68352642]), (11, 5, False): array([-0.25064599, -0.05666857]), (12, 5, False): array([-0.18202765, -0.45013239]), (21, 1, True): array([ 0.63100686, -0.35034803]), (17, 10, False): array([-0.44747613, -0.71094779]), (10, 2, False): array([-0.17746479, -0.13721414]), (13, 2, False): array([-0.26507395, -0.513545 ]), (21, 6, False): array([ 0.89323035, -1. ]), (4, 3, False): array([-0.05882353, -0.4695122 ]), (17, 2, False): array([-0.13363029, -0.64186804]), (20, 6, False): array([ 0.70974743, -0.87660863]), (9, 2, False): array([-0.26213592, -0.16680429]), (14, 2, False): array([-0.32671082, -0.55620642]), (19, 2, False): array([ 0.3720099 , -0.77637615]), (13, 6, True): array([-0.12087912, -0.14921466]), (21, 6, True): array([ 0.90857467, -0.00446429]), (10, 9, False): array([-0.55263158, -0.09791667]), (15, 10, True): array([-0.54989384, -0.34817814]), (12, 7, False): array([-0.46282974, -0.42440318]), (8, 4, False): array([-0.0990991 , -0.34846989]), (18, 4, False): array([ 0.20697413, -0.6894393 ]), (11, 9, False): array([-0.55949367, -0.08217593]), (15, 9, False): array([-0.56623932, -0.59211284]), (21, 1, False): array([ 0.63581952, -1. 
]), (11, 7, False): array([-0.47368421, -0.06752789]), (15, 7, False): array([-0.45819398, -0.56425983]), (17, 7, False): array([-0.10972851, -0.62171871]), (20, 7, False): array([ 0.7704 , -0.89486964]), (21, 5, True): array([ 0.90178571, -0.12026726]), (20, 9, False): array([ 0.75595119, -0.89578361]), (20, 4, True): array([ 0.63857374, -0.18791946]), (18, 1, False): array([-0.37867247, -0.77873884]), (14, 4, False): array([-0.21205098, -0.53594587]), (12, 6, False): array([-0.17343173, -0.42756084]), (16, 6, False): array([-0.15198238, -0.59006897]), (19, 6, True): array([ 0.48141593, -0.05747126]), (11, 2, False): array([-0.26754386, -0.10151692]), (18, 3, True): array([ 0.02380952, -0.25892857]), (15, 3, False): array([-0.31140351, -0.57871148]), (15, 2, True): array([-0.27131783, -0.23877069]), (18, 2, True): array([ 0.088 , -0.25225225]), (20, 7, True): array([0.78504673, 0.05035971]), (5, 4, False): array([-0.33944954, -0.32241814]), (10, 7, False): array([-0.46062053, -0.05006954]), (13, 2, True): array([-0.17777778, -0.29120879]), (4, 6, False): array([-0.30232558, -0.36627907]), (19, 7, False): array([ 0.61601982, -0.78562874]), (21, 3, False): array([ 0.88930582, -1. ]), (15, 7, True): array([-0.56521739, -0.13530655]), (17, 7, True): array([-0.09933775, -0.19731801]), (11, 3, False): array([-0.150358 , -0.04809976]), (6, 7, False): array([-0.4057971 , -0.34074074]), (20, 10, True): array([ 0.43396977, -0.23861852]), (13, 8, False): array([-0.52811736, -0.50087209]), (17, 4, False): array([-0.07212056, -0.65173572]), (19, 4, False): array([ 0.41251057, -0.7755102 ]), (10, 8, False): array([-0.49438202, -0.07676903]), (10, 6, False): array([-0.23783784, -0.04005253]), (17, 10, True): array([-0.44720497, -0.42372093]), (12, 1, True): array([-0.76923077, -0.46354167]), (14, 7, False): array([-0.48717949, -0.50696767]), (16, 2, True): array([-0.1969697 , -0.19672131]), (5, 10, False): array([-0.61538462, -0.50543478]), (9, 8, False): array([-0.41979522, -0.13492741]), (10, 1, False): array([-0.77377892, -0.35486111]), (5, 3, False): array([-0.09756098, -0.40896359]), (12, 4, False): array([-0.20374707, -0.4675912 ]), (14, 5, False): array([-0.17226436, -0.50533911]), (20, 4, False): array([ 0.65364478, -0.90378549]), (17, 6, False): array([ 0.00995575, -0.62257697]), (13, 3, True): array([-0.35849057, -0.26878613]), (21, 7, False): array([ 0.92687471, -1. 
]), (18, 1, True): array([-0.46969697, -0.5045045 ]), (13, 4, False): array([-0.20979021, -0.50189892]), (8, 3, False): array([-0.2231405, -0.3382643]), (19, 3, True): array([ 0.41419142, -0.07878788]), (11, 8, False): array([-0.57446809, -0.08444187]), (17, 8, False): array([-0.36333699, -0.62194116]), (20, 9, True): array([ 0.75671141, -0.21192053]), (18, 5, True): array([ 0.0859375 , -0.26056338]), (19, 5, True): array([ 0.43338684, -0.17482517]), (19, 6, False): array([ 0.49889381, -0.82379863]), (15, 3, True): array([-0.32673267, -0.13868613]), (8, 1, False): array([-0.73228346, -0.63368421]), (6, 8, False): array([-0.46268657, -0.41563055]), (9, 5, False): array([-0.25949367, -0.13288478]), (10, 4, False): array([-0.20903955, -0.06577086]), (21, 9, True): array([ 0.93073841, -0.11244019]), (8, 8, False): array([-0.56175299, -0.35363458]), (7, 7, False): array([-0.55670103, -0.38636364]), (13, 7, False): array([-0.47540984, -0.50212887]), (5, 7, False): array([-0.40425532, -0.37575758]), (18, 10, True): array([-0.22018349, -0.4152431 ]), (15, 1, True): array([-0.66037736, -0.46741573]), (9, 6, False): array([-0.19384615, -0.12139219]), (21, 8, True): array([ 0.93612079, -0.09405941]), (19, 1, False): array([-0.11815068, -0.81590909]), (8, 2, False): array([-0.24324324, -0.32976654]), (16, 3, False): array([-0.23582766, -0.61100251]), (18, 6, False): array([ 0.28229167, -0.68207127]), (21, 7, True): array([ 0.92067308, -0.14320388]), (13, 4, True): array([-0.29824561, -0.167979 ]), (21, 4, True): array([ 0.88577955, -0.09647059]), (13, 10, True): array([-0.57219251, -0.32985658]), (19, 2, True): array([ 0.41352201, -0.10759494]), (14, 10, True): array([-0.61904762, -0.33611442]), (7, 6, False): array([-0.16022099, -0.36734694]), (17, 5, True): array([-0.16901408, -0.2016129 ]), (21, 2, False): array([ 0.8852914, -1. ]), (21, 8, False): array([ 0.92784459, -1. ]), (7, 3, False): array([-0.14141414, -0.3502907 ]), (6, 2, False): array([-0.0625 , -0.4470377]), (18, 6, True): array([ 0.33757962, -0.19349005]), (13, 1, True): array([-0.86 , -0.39835165]), (14, 9, True): array([-0.48076923, -0.24874372]), (9, 7, False): array([-0.46753247, -0.0902439 ]), (9, 4, False): array([-0.2211838 , -0.20164609]), (10, 3, False): array([-0.17784257, -0.11309116]), (18, 7, True): array([ 0.44604317, -0.25478927]), (14, 7, True): array([-0.51020408, -0.14611872]), (16, 7, True): array([-0.3877551 , -0.18032787]), (19, 4, True): array([ 0.46909667, -0.14666667]), (17, 3, True): array([-0.11764706, -0.25206612]), (4, 9, False): array([-0.55555556, -0.5 ]), (7, 9, False): array([-0.59162304, -0.50710227]), (6, 4, False): array([-0.10294118, -0.39162113]), (7, 8, False): array([-0.53703704, -0.42740841]), (11, 4, False): array([-0.15086207, -0.05357143]), (21, 4, False): array([ 0.89444699, -1. 
]), (19, 10, True): array([-0.02707205, -0.33179724]), (19, 8, True): array([0.59756098, 0.09285714]), (14, 3, True): array([-0.37815126, -0.12972973]), (17, 4, True): array([-0.08333333, -0.27010309]), (20, 1, True): array([ 0.15189873, -0.39181287]), (20, 3, True): array([0.63344595, 0.05844156]), (7, 2, False): array([-0.34343434, -0.44042838]), (16, 6, True): array([-0.06666667, -0.16808511]), (20, 8, True): array([ 0.80379747, -0.01481481]), (7, 1, False): array([-0.78680203, -0.64504284]), (19, 7, True): array([0.64705882, 0.02597403]), (12, 5, True): array([-0.18181818, -0.13333333]), (14, 5, True): array([-0.23255814, -0.12834225]), (19, 1, True): array([-0.16899225, -0.35606061]), (16, 4, True): array([-0.23478261, -0.24568966]), (5, 6, False): array([-0.14942529, -0.24581006]), (11, 6, False): array([-0.09352518, -0.07692308]), (15, 6, True): array([-0.23728814, -0.26 ]), (9, 1, False): array([-0.78056426, -0.52768456]), (12, 2, True): array([-0.52173913, -0.08928571]), (15, 5, True): array([-0.16363636, -0.16210046]), (18, 9, True): array([-0.25641026, -0.30535714]), (4, 8, False): array([-0.53191489, -0.32984293]), (15, 8, True): array([-0.37864078, -0.15503876]), (17, 2, True): array([ 0.0075188 , -0.28349515]), (7, 4, False): array([-0.1396648 , -0.34993084]), (14, 8, True): array([-0.51807229, -0.21052632]), (6, 6, False): array([-0.04761905, -0.3380531 ]), (17, 1, True): array([-0.53781513, -0.53497164]), (18, 4, True): array([ 0.19548872, -0.25266904]), (4, 2, False): array([-0.19148936, -0.38095238]), (15, 4, True): array([-0.27102804, -0.24146341]), (5, 8, False): array([-0.69892473, -0.32207792]), (5, 9, False): array([-0.44761905, -0.44109589]), (12, 10, True): array([-0.63366337, -0.23857868]), (5, 5, False): array([-0.14606742, -0.35602094]), (13, 9, True): array([-0.70833333, -0.21108179]), (14, 2, True): array([-0.20408163, -0.16193182]), (20, 6, True): array([ 0.72163389, -0.03144654]), (18, 8, True): array([ 0.10738255, -0.23836127]), (6, 1, False): array([-0.83216783, -0.59099437]), (12, 6, True): array([-0.02040816, -0.06896552]), (16, 8, True): array([-0.44 , -0.19827586]), (5, 1, False): array([-0.68888889, -0.61388889]), (8, 7, False): array([-0.50607287, -0.3296371 ]), (16, 5, True): array([-0.07272727, -0.20874751]), (14, 1, True): array([-0.74358974, -0.46073298]), (13, 7, True): array([-0.56989247, -0.09340659]), (12, 4, True): array([-0.4 , -0.12571429]), (16, 1, True): array([-0.7810219 , -0.52723312]), (16, 3, True): array([-0.26315789, -0.21846847]), (17, 6, True): array([-0.02597403, -0.15667311]), (6, 5, False): array([-0.15 , -0.31688805]), (20, 5, True): array([ 0.65740741, -0.05594406]), (12, 3, True): array([-0.43478261, -0.2259887 ]), (14, 4, True): array([-0.20720721, -0.12135922]), (4, 5, False): array([-0.43478261, -0.35858586]), (16, 9, True): array([-0.49206349, -0.28834356]), (5, 2, False): array([-0.22772277, -0.34834835]), (4, 7, False): array([-0.33333333, -0.26162791]), (12, 9, True): array([-0.71428571, -0.15577889]), (12, 7, True): array([-0.46666667, -0.10326087]), (12, 8, True): array([-0.51020408, -0.09411765]), (4, 1, False): array([-0.74074074, -0.65445026]), (4, 4, False): array([-0.27659574, -0.4695122 ])})
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
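As a quick reference before you start: after each sampled episode, the constant-$\alpha$ rule implemented below updates every visited state-action pair via

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big( G_t - Q(S_t, A_t) \big),$$

and episodes are generated $\epsilon$-greedily, i.e. the greedy action is selected with probability $1 - \epsilon + \epsilon / n_A$ and every other action with probability $\epsilon / n_A$, where $\epsilon$ is gradually decayed from `eps_start` toward `eps_min`.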
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, .02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
env.observation_space.contains((14, 10, False))  # check that an example state lies in the observation space (the sample state is illustrative)
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
action = env.action_space.sample()
print("action taken", action)
state, reward, done, info = env.step(action)
print("state", state)
print("reward", reward)
print("done", done)
print("info", info)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
_____no_output_____
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
[0.8, 0.2] if 0 > 18 else [0.2, 0.8]  # quick sanity check of the conditional used in the policy below
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
# example episode: each element is a (state, action, reward) tuple that can be unpacked like this
A = [((18, 10, True), 1, 0.0), ((15, 10, False), 1, 0.0), ((17, 10, False), 1, -1.0)]
for a, b, c in A:
print(a, b, c)
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- The keys of `Q` are states `s`, so an entry looks like `Q[(player_sum, dealer_card, usable_ace)] = [long_term_reward_of_action_0, long_term_reward_of_action_1]`, i.e. index 0 holds the estimate for `STICK` and index 1 the estimate for `HIT`.
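For example, indexing into `Q` looks like this (the state and the numbers are made up purely to illustrate the layout):

```python
# illustration only: an invented entry with the layout described above
Q = {(13, 10, False): [-0.59, -0.58]}   # [value of STICK, value of HIT]
state = (13, 10, False)                 # (player sum, dealer card, usable ace)
print(Q[state][0])                      # estimated long-term reward of STICK (action 0)
print(Q[state][1])                      # estimated long-term reward of HIT (action 1)
```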
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode_s_a_r_sequence = generate_episode(env)
episode_state_sequence, episode_action_sequence, episode_reward_sequence = zip(*episode_s_a_r_sequence)
discount_compound = np.array([gamma**i for i in range(len(episode_reward_sequence)+1)])
for i, state in enumerate(episode_state_sequence):
returns_sum[state][episode_action_sequence[i]] += sum(episode_reward_sequence[i:]*discount_compound[:-(i+1)])
N[state][episode_action_sequence[i]] += 1.0
Q[state][episode_action_sequence[i]] = returns_sum[state][episode_action_sequence[i]]/N[state][episode_action_sequence[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_action_prob(state, Q, epsilon):
    # epsilon-greedy over the two Blackjack actions: the greedy action gets
    # probability 1 - epsilon and the other action gets epsilon
    if state in Q:
        probs = [epsilon, epsilon]
        best_action = np.argmax(Q[state])
        probs[best_action] = 1 - epsilon
    else:
        # state not visited yet: act uniformly at random
        probs = [0.5, 0.5]
    return probs
def generate_episode(bj_env, Q, epsilon):
episode = []
state = bj_env.reset()
while True:
probs = generate_action_prob(state, Q, epsilon)
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def policy_extractor(Q):
    # greedy policy: pick the highest-valued action for every visited state;
    # states that were never visited default to action 0 (STICK)
    policy = defaultdict(lambda: 0)
    for state, action_value in Q.items():
        policy[state] = np.argmax(action_value)
    return policy
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
epsilon = max(epsilon*eps_decay, eps_min)
episode_s_a_r_sequence = generate_episode(env, Q, epsilon)
episode_state_sequence, episode_action_sequence, episode_reward_sequence = zip(*episode_s_a_r_sequence)
discount_compound = np.array([gamma**i for i in range(len(episode_reward_sequence)+1)])
for i, state in enumerate(episode_state_sequence):
state_discounted_reward = sum(episode_reward_sequence[i:]*discount_compound[:-(i+1)])
state_action_error = state_discounted_reward - Q[state][episode_action_sequence[i]]
Q[state][episode_action_sequence[i]] += alpha*state_action_error
policy = policy_extractor(Q)
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, num_episodes = 500000, alpha =0.02, gamma=1.0)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
import sys
print(sys.version)
###Output
3.5.2 (default, Nov 12 2018, 13:43:14)
[GCC 5.4.0 20160609]
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
print(env.action_space)
###Output
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
action = env.action_space.sample()
print(state, action)
state, reward, done, info = env.step(action)
print(state)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(4, 6, False) 0
(4, 6, False)
End game! Reward: -1.0
You lost :(
(16, 2, False) 1
(20, 2, False)
(20, 2, False) 1
(30, 2, False)
End game! Reward: -1
You lost :(
(13, 1, False) 0
(13, 1, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((12, 7, False), 1, 0), ((13, 7, False), 1, 0), ((19, 7, False), 0, -1.0)]
[((21, 10, True), 0, 1.0)]
[((19, 6, False), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
N = defaultdict(lambda: np.zeros(env.action_space.n))
print(N)
import time
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
start=time.time()
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# generate an episode
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
        #discounts = np.array([gamma**i for i in range(len(episode))])
        # shortcut: in Blackjack only the final reward is nonzero, so the return
        # from step i reduces to gamma**(T-1-i) times the final reward
        reward = episode[-1][-1]
        #print(episode, len(episode), reward)
for i, state in enumerate(states):
action = actions[i]
g = gamma**(len(episode)-1-i)*reward
#g = sum(discounts[:len(states)-i]*rewards[i:])
returns_sum[state][action] += g
N[state][action]+= 1
Q[state][action]= returns_sum[state][action]/N[state][action]
print("elapsed:", time.time()-start)
return Q
Q = mc_prediction_q(env, 1, generate_episode_from_limit_stochastic)
###Output
elapsed: 0.00030732154846191406
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.elapsed: 41.55307722091675
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def get_prob(Q_state, epsilon):
    # epsilon-greedy probabilities: every action gets epsilon/nA, and the
    # greedy action (argmax of the action values) gets the remaining 1 - epsilon
    probs = epsilon*np.ones_like(Q_state)/len(Q_state)
    probs[np.argmax(Q_state)] += 1 - epsilon
    return probs
#get_prob([40, 2], 0.1)
def generate_episode_epsilon_greedy(env, Q, epsilon):
episode = []
state = env.reset()
nA = env.action_space.n
while True:
# get probability
if state in Q:
probs = get_prob(Q[state], epsilon)
else:
probs = np.ones_like(Q[state])/nA
action = np.random.choice(np.arange(nA), p=probs)
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def update_Q(env, Q, episode, gamma, alpha):
states, actions, rewards = zip(*episode)
#discounts = np.array([gamma**i for i in range(len(episode))])
reward = episode[-1][-1]
#print(episode, len(episode), reward)
for i, state in enumerate(states):
action = actions[i]
g = gamma**(len(episode)-1-i)*reward
#g = sum(discounts[:len(states)-i]*rewards[i:])
Q[state][action] += alpha*(g-Q[state][action])
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, epsilon_start=1.0, epsilon_decay=0.999999, epsilon_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
epsilon=epsilon_start
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{} Epsilon {}.".format(i_episode, num_episodes, epsilon), end="")
sys.stdout.flush()
## TODO: complete the function
# generate episode using epsilon-greedy
epsilon = max(epsilon*epsilon_decay, epsilon_min)
episode = generate_episode_epsilon_greedy(env, Q, epsilon)
# update Q using constant alpha
Q = update_Q(env, Q, episode, gamma, alpha)
policy = dict((k,np.argmax(v)) for k,v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 1000000, 0.1)
###Output
Episode 1000000/1000000 Epsilon 0.05.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
The **true** optimal policy $\pi_*$ can be found in Figure 5.2 of the [textbook](http://go.udacity.com/rl-textbook) (and appears below). Compare your final estimate to the optimal policy - how close are you able to get? If you are not happy with the performance of your algorithm, take the time to tweak the decay rate of $\epsilon$, change the value of $\alpha$, and/or run the algorithm for more episodes to attain better results.![True Optimal Policy](images/optimal.png)
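If eyeballing a grid is easier than reading the listing in the next cell, a minimal sketch like the one below (assuming `policy` is the dictionary returned by `mc_control` above, mapping `(player_sum, dealer_card, usable_ace)` to an action) prints the usable-ace part of the estimated policy as a table for comparison with Figure 5.2:

```python
import numpy as np

# rows: player sums 12..21, columns: dealer's face-up card 1 (Ace)..10
grid = np.full((10, 10), -1)
for (player_sum, dealer_card, usable_ace), action in policy.items():
    if usable_ace and 12 <= player_sum <= 21:
        grid[player_sum - 12, dealer_card - 1] = action
print(grid)   # 1 = HIT, 0 = STICK, -1 = state never encountered
```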
###Code
for k, v in policy.items():
if k[2]:
print(k,v)
###Output
(17, 8, True) 1
(12, 10, True) 1
(12, 1, True) 1
(21, 3, True) 0
(12, 5, True) 1
(14, 2, True) 1
(14, 1, True) 1
(16, 4, True) 1
(16, 7, True) 1
(14, 6, True) 1
(16, 1, True) 1
(20, 9, True) 0
(16, 8, True) 1
(18, 4, True) 0
(20, 7, True) 0
(15, 8, True) 1
(18, 10, True) 0
(20, 5, True) 0
(13, 5, True) 1
(15, 2, True) 1
(15, 3, True) 0
(13, 8, True) 1
(15, 4, True) 1
(17, 1, True) 1
(19, 5, True) 0
(19, 6, True) 0
(21, 9, True) 0
(14, 8, True) 1
(15, 1, True) 1
(14, 5, True) 1
(17, 10, True) 1
(16, 2, True) 1
(12, 8, True) 1
(21, 5, True) 0
(21, 2, True) 0
(14, 3, True) 1
(21, 4, True) 0
(15, 7, True) 0
(21, 6, True) 0
(18, 6, True) 1
(18, 5, True) 1
(20, 6, True) 0
(18, 9, True) 1
(20, 1, True) 0
(13, 1, True) 1
(15, 6, True) 1
(13, 7, True) 1
(20, 10, True) 0
(17, 4, True) 1
(17, 5, True) 1
(19, 9, True) 1
(19, 10, True) 1
(13, 10, True) 1
(21, 10, True) 0
(17, 3, True) 1
(19, 3, True) 0
(19, 4, True) 0
(21, 8, True) 0
(17, 6, True) 1
(17, 9, True) 1
(12, 4, True) 1
(21, 1, True) 0
(12, 7, True) 1
(14, 10, True) 1
(14, 7, True) 1
(21, 7, True) 0
(12, 9, True) 1
(20, 8, True) 0
(13, 2, True) 1
(16, 10, True) 1
(18, 2, True) 1
(18, 1, True) 1
(20, 2, True) 0
(16, 5, True) 1
(12, 2, True) 1
(17, 2, True) 0
(15, 9, True) 1
(15, 10, True) 1
(18, 8, True) 0
(18, 7, True) 0
(20, 4, True) 0
(13, 3, True) 1
(14, 4, True) 0
(13, 9, True) 1
(18, 3, True) 1
(13, 6, True) 1
(15, 5, True) 1
(16, 9, True) 1
(17, 7, True) 1
(14, 9, True) 1
(19, 7, True) 0
(19, 8, True) 0
(16, 3, True) 1
(16, 6, True) 0
(13, 4, True) 0
(20, 3, True) 0
(19, 1, True) 0
(19, 2, True) 0
(12, 3, True) 1
(12, 6, True) 1
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v1')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
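As a concrete reading of one observation (the values below are made up purely for illustration):
```python
# Illustrative only: unpacking one Blackjack observation
state = (14, 10, False)                  # player's sum 14, dealer shows a 10, no usable ace
player_sum, dealer_card, usable_ace = state
```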
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(4, 10, False)
(9, 10, False)
End game! Reward: -1.0
You lost :(
(21, 2, True)
End game! Reward: 1.0
You won :)
(12, 10, True)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
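Because each `episode[i]` bundles $(S_i, A_i, R_{i+1})$, the return from any step can be read straight off the reward entries. The helper below is a minimal sketch (its name is illustrative and not part of the starter code):
```python
# Minimal sketch (illustrative helper): discounted return G_t computed from the
# (state, action, reward) tuples of one episode.
def return_from(episode, t, gamma=1.0):
    return sum(gamma**k * reward for k, (_, _, reward) in enumerate(episode[t:]))
```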
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((7, 4, False), 0, -1.0)]
[((9, 5, False), 1, 0.0), ((19, 5, False), 1, -1.0)]
[((17, 7, False), 1, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
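(The estimate itself is just an empirical average: for every visited pair, $Q(s,a) \approx \frac{1}{N(s,a)} \sum_{i} G^{(i)}_{s,a}$, i.e. the sum of observed returns divided by the visit count, which is what the `returns_sum` and `N` dictionaries in the starter code are for.)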
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
        # sample an episode with the supplied policy
        episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]]+=1.0
Q[state][actions[i]] = returns_sum[state][actions[i]]/N[state][actions[i]]
return Q
Q = mc_prediction_q(env, 1000, generate_episode_from_limit_stochastic)
###Output
Episode 1000/1000.
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
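(Because the policy is stochastic, the plotted state values weight the action values by the policy's action probabilities, $v_\pi(s) = \sum_a \pi(a|s)\, q_\pi(s,a)$; that is why the dictionary comprehension below uses the $[0.8, 0.2]$ and $[0.2, 0.8]$ weights of the policy defined above.)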
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episodes=[]
state=env.reset()
while True:
            # 80/20 soft-greedy policy: favour whichever action currently looks best in Q
            action = np.random.choice(np.arange(2), p=[0.8, 0.2] if np.argmax(Q[state])==0 else [0.2, 0.8])
next_state, reward, done, info=env.step(action)
episodes.append((state, action, reward))
state=next_state
if done:
break
states, actions, rewards = zip(*episodes)
discounts=np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
Q[state][actions[i]]+= alpha*(sum(rewards[i:]*discounts[:-(1+i)])-Q[state][actions[i]])
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.03)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Here is a picture for visualizing the observation space w.r.t. the game: (player sum (may exceed 21), dealer card (11 or less), usable ace).![black_jack](images/blackjack.png)Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(10, 10, False)
End game! Reward: -1.0
You lost :(
(18, 6, False)
(19, 6, False)
End game! Reward: -1
You lost :(
(20, 6, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(5):
    episode = generate_episode_from_limit_stochastic(env)
    print('\n#{}: {} step(s) long'.format(i+1, len(episode)))
    for step in reversed(episode):
        print(step)
###Output
#1: 2 step(s) long
((19, 10, False), 1, -1)
((12, 10, False), 1, 0)
#2: 2 step(s) long
((16, 1, False), 1, -1)
((12, 1, False), 1, 0)
#3: 1 step(s) long
((14, 10, False), 1, -1)
#4: 1 step(s) long
((20, 7, False), 0, 1.0)
#5: 2 step(s) long
((16, 8, False), 1, -1)
((6, 8, False), 1, 0)
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
eps = generate_episode_from_limit_stochastic(env)
for i, ep in enumerate(eps):
print(ep[0], ep[1], ep[2])
print(i, ep)
state, action, reward = zip(*eps)
print(state, action, reward)
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0, debug=False):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
Z = defaultdict(lambda: np.zeros(env.action_space.n))
if debug:
num_episodes=2
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episodes = generate_episode(env)
G = 0
for t, observation in enumerate(reversed(episodes)):
state = observation[0]
action = observation[1]
reward = observation[2]
G = gamma*G + reward
if debug:
print('state:{}, action:{}, reward:{}, G: {}'.
format(state, action, reward, G))
returns_sum[state][action] += G
N[state][action] += 1
    # average the accumulated returns for every visited state-action pair
    for state in N:
        for a in range(env.action_space.n):
            if N[state][a] > 0:  # skip actions never taken in this state (avoids division by zero)
                Q[state][a] = returns_sum[state][a] / N[state][a]
if debug:
print('Q: {}'.format(Q[state]))
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic, 0.9)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
# Placeholder for a first-visit check (iterate from t-1 down to 0 if implementing it properly).
# Returning True makes this an every-visit update; as noted above, first-visit and
# every-visit MC are equivalent for the Blackjack environment.
def seen_first_time(t, episode):
    return True
def apply_policy(Q_state, epsilon, nA):
action = np.argmax(Q_state)
policy = (np.ones(nA)*epsilon)/nA
policy[action] = 1 - epsilon + (epsilon/nA)
return policy
def generate_episodes_with_epsilon_greedy(env, Q, epsilon = 0.2):
episode = []
state = env.reset()
nA = env.action_space.n
while True:
if state in Q:
policy_state = apply_policy(Q[state], epsilon, nA)
action = np.random.choice(nA, p=policy_state)
else:
action = env.action_space.sample()
state_next, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = state_next
if done:
break
return episode
def init(nA):
Q = defaultdict(lambda: np.zeros(nA))
N = defaultdict(lambda: np.zeros(nA))
policy = None
return_sum = defaultdict(lambda: np.zeros(nA))
eps, eps_decay, eps_min = 1.0, 0.99999, 0.10
return Q, N, policy, return_sum, eps, eps_decay, eps_min
def parse(step):
return step[0], step[1], step[2]
def update_Q(Q, state, action, future_reward, alpha):
Q[state][action] += alpha*(future_reward - Q[state][action])
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, epsilon=0.1):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q, N, policy, return_sum, epsilon, eps_decay, eps_min = init(nA)
# loop over episodes
for i_episode in range(1, num_episodes+1):
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
epsilon = max(epsilon*eps_decay, eps_min)
episode = generate_episodes_with_epsilon_greedy(env, Q, epsilon)
future_reward = 0
# iterate in reverse
for t, step in enumerate(reversed(episode)):
state, action, reward = parse(step)
# reward = reward_t + discounted_future_reward
future_reward = gamma*future_reward + reward
if seen_first_time(t, episode):
Q = update_Q(Q, state, action, future_reward, alpha)
policy = dict((k, np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
alpha = [0.01*i for i in range(1,11) if i%2==0]
#alpha = [0.03]
policy_a = {}
Q_a = {}
for a in alpha:
print('alpha: {}'.format(a))
policy, Q = mc_control(env, 500000, a)
policy_a[a] = policy
Q_a[a] = Q
###Output
alpha: 0.02
Episode 500000/500000.alpha: 0.04
Episode 500000/500000.alpha: 0.06
Episode 500000/500000.alpha: 0.08
Episode 500000/500000.alpha: 0.1
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
a = alpha[0]
V = dict((k,np.max(v)) for k, v in Q_a[a].items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
for k, v in policy_a.items():
print('alpha: {}'.format(k))
plot_policy(v)
###Output
alpha: 0.02
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
[2018-07-23 13:11:05,537] Making new env: Blackjack-v0
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(20, 1, False)
End game! Reward: 1.0
You won :)
(19, 10, False)
End game! Reward: -1
You lost :(
(12, 9, False)
(14, 9, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(21, 10, True)
End game! Reward: 1.0
You won :)
(12, 10, False)
(17, 10, False)
End game! Reward: -1
You lost :(
(10, 2, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((15, 2, False), 0, -1.0)]
[((7, 8, False), 1, 0), ((14, 8, False), 1, -1)]
[((9, 10, False), 1, 0), ((11, 10, False), 1, 0), ((21, 10, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
N = defaultdict(lambda: np.zeros(env.action_space.n))
if not N[0][0]:
N[0][0] = 1.
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
visit = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
        # sample an episode with the supplied policy
        episode = generate_episode(env)
        rewards = [r for _, _, r in episode]
        for t, (s, a, _) in enumerate(episode):
            # first-visit MC prediction: accumulate the return from step t, not just the immediate reward
            if not visit[s][a]:
                G = sum(gamma**k * r_k for k, r_k in enumerate(rewards[t:]))
                returns_sum[s][a] += G
                N[s][a] += 1
                visit[s][a] = 1
        visit.clear()  # clear the visits recorded for the current episode
        Q = {key: returns_sum[key]/N[key] for key in returns_sum.keys()}
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, eps_decay=.9999965, gamma=1.0):
nA = env.action_space.n
epsilon = 1.
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
visit = defaultdict(lambda: np.zeros(nA))
policy = defaultdict(lambda: 0)
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{} eps: {}.".format(i_episode, num_episodes, epsilon), end="")
sys.stdout.flush()
## TODO: complete the function
episode = []
state = env.reset() # get initial state
while True:
# decide on next action for the current state
exploitation = np.random.choice(np.arange(2), p=[epsilon, 1-epsilon])
if exploitation:
# exploitation (choice of action based on greedy choice in Q table)
action = policy[state]
            else:
                # exploration (random choice of action)
                action = env.action_space.sample()
# push the action and get the environment response (new state and reward)
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
if done:
break
state = next_state
        # update the Q table based on the return observed from each step
        for i, (s, a, _) in enumerate(episode):
            # first-visit MC prediction
            if not visit[s][a]:
                # return from step i: discounted sum of the remaining rewards
                G = sum(gamma**k * step[2] for k, step in enumerate(episode[i:]))
                Q[s][a] += alpha*(G - Q[s][a])
                visit[s][a] = 1
                # update the greedy policy for the visited state
                policy[s] = np.argmax(Q[s])
        visit.clear()  # clear the visits recorded for the current episode
# TODO: update epsilon
epsilon *= eps_decay
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 1000000, 0.01, gamma=1.0)
###Output
Episode 1000000/1000000 eps: 0.030197304152685518.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
_____no_output_____
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
import itertools
from copy import deepcopy
from tqdm import tqdm
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(20, 10, False)
(21, 10, False)
End game! Reward: 1.0
You won :)
(18, 8, False)
(19, 8, False)
End game! Reward: 1.0
You won :)
(13, 10, False)
(21, 10, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((9, 2, False), 1, 0.0), ((19, 2, False), 0, 1.0)]
[((5, 9, False), 1, 0.0), ((15, 9, False), 1, -1.0)]
[((14, 10, False), 1, 0.0), ((17, 10, False), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def calc_return(episode, gamma=1):
    """
    Episodes come in as (state, action, reward) tuples.
    Returns an array whose element t is the return G_t observed from step t onwards.
    """
    rewards = np.array([reward for _, _, reward in episode])
    returns = np.zeros(len(rewards))
    G = 0.0
    for t in reversed(range(len(rewards))):
        G = rewards[t] + gamma * G
        returns[t] = G
    return returns
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episode = generate_episode(env)
returns = calc_return(episode, gamma)
visited_state_actions = set()
for t, (state, action, reward) in enumerate(episode):
if (state, action) not in visited_state_actions:
visited_state_actions.add((state, action))
N[state][action] += 1
returns_sum[state][action] += returns[t]
    # average the accumulated returns; skip actions never taken in a state to avoid dividing by zero
    # (itertools is already imported at the top of this notebook)
    for state, action in itertools.product(returns_sum.keys(), range(env.action_space.n)):
        if N[state][action] > 0:
            Q[state][action] = returns_sum[state][action] / N[state][action]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def get_all_states():
hand_sums = set(range(32))
dealer_card = set(range(11))
useable_ace = {0,1}
return itertools.product(hand_sums, dealer_card, useable_ace)
def update_policy(Q, policy, state, epsilon, n_actions):
policy[state] = np.full(n_actions, epsilon/n_actions)
best_action = Q[state].argmax()
policy[state][best_action] = 1 - epsilon + (epsilon/n_actions)
return policy
def get_epsilon(epsilon, decay_rate=0.999, min_epsilon=0.05) -> float:
return np.maximum(epsilon*decay_rate, min_epsilon)
def generate_episode(bj_env, Q, policy, epsilon, n_actions):
episode = []
state = bj_env.reset()
while True:
policy = update_policy(Q, policy, state, epsilon, n_actions)
probs = policy[state]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode, policy
def update_Q(Q, state, action, alpha, return_amt):
Q[state][action] += alpha * (return_amt - Q[state][action])
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
policy = defaultdict(lambda: np.full(nA, 1/nA))
epsilon=0.999
# loop over episodes
for i_episode in tqdm(range(1, num_episodes+1)):
# # monitor progress
# if i_episode % 1000 == 0:
# print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
# sys.stdout.flush()
## TODO: complete the function
epsilon = get_epsilon(epsilon)
episode, policy = generate_episode(env, Q, policy, epsilon, nA)
returns = calc_return(episode, gamma)
for t, (state, action, reward) in enumerate(episode):
Q = update_Q(Q, state, action, alpha, returns[t])
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 10_000_000, 0.02)
###Output
69%|██████▊ | 6867835/10000000 [18:44<08:45, 5960.61it/s]
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
import matplotlib.pyplot as plt
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
%matplotlib inline
plt.rcParams['figure.facecolor'] = 'w'
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(5):
state = env.reset()
while True:
print(state, end=' ')
action = env.action_space.sample()
state, reward, done, info = env.step(action)
print(action, ['(stick)', '(hit)'][action])
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(20, 10, False) 0 (stick)
End game! Reward: 1.0
You won :)
(17, 3, False) 1 (hit)
End game! Reward: -1
You lost :(
(16, 4, True) 1 (hit)
(15, 4, False) 0 (stick)
End game! Reward: -1.0
You lost :(
(9, 3, False) 1 (hit)
(12, 3, False) 1 (hit)
(18, 3, False) 1 (hit)
End game! Reward: -1
You lost :(
(10, 2, False) 1 (hit)
(21, 2, True) 1 (hit)
(20, 2, False) 1 (hit)
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
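In other words, the policy being evaluated is $\pi(\texttt{STICK} \mid s) = 0.8$ and $\pi(\texttt{HIT} \mid s) = 0.2$ whenever the player's sum exceeds 18, and the probabilities are reversed otherwise.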
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((16, 2, False), 0, 1.0)]
[((17, 2, False), 0, -1.0)]
[((16, 2, True), 1, 0), ((18, 2, True), 1, 0), ((13, 2, False), 1, 0), ((15, 2, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
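Concretely, the prediction step keeps a running sum of returns and a visit count for every state-action pair and estimates their ratio, $Q(s,a) \approx \frac{1}{N(s,a)} \sum G_t$, where each $G_t = \sum_{k=0}^{T-t-1} \gamma^{k} R_{t+1+k}$ is the discounted return following a visit to $(s,a)$.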
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
    # loop over episodes
    longest_episode_len = 0
    for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# get an episode
episode = generate_episode(env)
# prepare discounts
episode_len = len(episode)
if episode_len > longest_episode_len:
longest_episode_len = episode_len
discounts = np.power(gamma, range(longest_episode_len))
# extract and separate states, actions, and rewards
states, actions, rewards = zip(*episode)
# iterate through the episode and accumulate the return sums and the visit counts
visited_state_actions = []
for i, state in enumerate(states):
action = actions[i]
state_action = (state, action)
if state_action not in visited_state_actions:
visited_state_actions.append(state_action)
returns_sum[state][action] += sum(rewards[i:] * discounts[:episode_len - i])
N[state][action] += 1
# update the Q table
for state, action_counts in N.items():
for action in action_counts.nonzero()[0]:
Q[state][action] = returns_sum[state][action] / action_counts[action]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 1000000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 1000000/1000000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
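Recall that an $\epsilon$-greedy policy derived from $Q$ selects the greedy action with probability $1 - \epsilon + \frac{\epsilon}{|\mathcal{A}|}$ and every other action with probability $\frac{\epsilon}{|\mathcal{A}|}$; the episode-generation helper in the next cell implements this scheme.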
###Code
def generate_episode_from_epsilon_greedy_policy(env, policy, epsilon):
episode = []
state = env.reset()
nA = env.action_space.n
while True:
probs = np.full(nA, epsilon / nA)
probs[policy[state]] += 1 - epsilon
action = np.random.choice(nA, p=probs)
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def mc_control(env, num_episodes, alpha=0.1, epsilon=0.1, gamma=1.0):
assert 0 <= gamma <= 1
if not callable(alpha):
alpha = (lambda alpha_val: lambda i_episode: alpha_val)(alpha)
if not callable(epsilon):
assert 0 <= epsilon <= 1
epsilon = (lambda epsilon_val: lambda i_episode: epsilon_val)(epsilon)
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# initialize empty dictionary of actions
policy = defaultdict(lambda: np.random.randint(0, nA))
# loop over episodes
longest_episode_len = 0
total_rewards = 0
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}. (alpha={:.4f}, epsilon={:.4f}, partial_average_rewards={:.4f})"
.format(i_episode, num_episodes, alpha(i_episode), epsilon(i_episode), total_rewards / 1000), end="")
sys.stdout.flush()
total_rewards = 0
# get an episode
episode = generate_episode_from_epsilon_greedy_policy(env, policy, epsilon(i_episode))
# prepare discounts
episode_len = len(episode)
if episode_len > longest_episode_len:
longest_episode_len = episode_len
discounts = np.power(gamma, range(longest_episode_len))
# extract and separate states, actions, and rewards
states, actions, rewards = zip(*episode)
total_rewards += rewards[-1]
# iterate through the episode and update the Q-table and policy
visited_state_actions = []
for i, state in enumerate(states):
action = actions[i]
state_action = (state, action)
if state_action not in visited_state_actions:
visited_state_actions.append(state_action)
Q[state][action] += alpha(i_episode) * (sum(rewards[i:] * discounts[:episode_len - i]) - Q[state][action])
policy[state] = Q[state].argmax()
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
def linearly_decaying_epsilon(num_decaying_episodes, initial_epsilon=1.0, min_epsilon=0.1):
decay_rate = (min_epsilon - initial_epsilon) / num_decaying_episodes
def epsilon_func(i_episode):
if i_episode > num_decaying_episodes:
return min_epsilon
return initial_epsilon + (i_episode - 1) * decay_rate
return epsilon_func
def exponentially_decaying_epsilon(decay_factor=0.999, initial_epsilon=1.0, min_epsilon=0.1):
def epsilon_func(i_episode):
return max(initial_epsilon * (decay_factor ** (i_episode - 1)), min_epsilon)
return epsilon_func
# obtain the estimated optimal policy and action-value function
num_episodes = 500000
policy, Q = mc_control(env, num_episodes, alpha=0.005, epsilon=linearly_decaying_epsilon(int(num_episodes * 0.8), 1.0, 0.05), gamma=1.0)
# policy, Q = mc_control(env, num_episodes, alpha=0.005, epsilon=exponentially_decaying_epsilon(0.99999, 1.0, 0.05), gamma=1.0)
###Output
Episode 500000/500000. (alpha=0.0050, epsilon=0.0500, partial_average_rewards=-0.0690)
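###Markdown
As a quick sanity check on the two schedules, the short sketch below plots them over the training horizon. It assumes the cell above (defining `linearly_decaying_epsilon`, `exponentially_decaying_epsilon`, and `num_episodes`) has already been run.
###Code
# visualize the two epsilon schedules side by side (illustrative sketch)
episodes = np.arange(1, num_episodes + 1, 1000)
lin_eps = linearly_decaying_epsilon(int(num_episodes * 0.8), 1.0, 0.05)
exp_eps = exponentially_decaying_epsilon(0.99999, 1.0, 0.05)
plt.plot(episodes, [lin_eps(i) for i in episodes], label='linear decay')
plt.plot(episodes, [exp_eps(i) for i in episodes], label='exponential decay')
plt.xlabel('episode')
plt.ylabel('epsilon')
plt.legend()
plt.show()
###Output
_____no_output_____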
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(1):
state = env.reset()
while True:
print(f'state: {state}')
action = env.action_space.sample()
print(f'action{action}')
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
state: (21, 2, True)
action0
End game! Reward: 0.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(1):
episode = generate_episode_from_limit_stochastic(env)
#print(*episode)
states, actions, rewards = zip(*episode)
print(states)
print(actions)
print(rewards)
#print(generate_episode_from_limit_stochastic(env))
###Output
((14, 10, False),)
(1,)
(-1,)
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
# we will record cumulative records for all state, action pairs
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
# N will record number of visits to each state
N = defaultdict(lambda: np.zeros(env.action_space.n))
# Q table where we record average values for each state-action (returns/N)
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# generate episode using a function
episode = generate_episode(env)
# episode returns a tuple of tuples
# one episode (game) is many state-action pairs [HIT, STICK]
states, actions, rewards = zip(*episode)
#discount array
gammas = np.array([gamma**i for i in range(len(rewards) + 1)])
# we update Q table for all state-actions pairs with corresponding reward
for i, state in enumerate(states):
# accumulate all rewards
returns_sum[state][actions[i]] += sum(rewards[i:] * gammas[:-(1+i)])
# divide by visits to get average value
N[state][actions[i]] += 1.0 #use float because we divide by this
# update Q value with new value
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
states = [4,2,3]
rewards = [2,2,2]
discounts = np.array([.9**i for i in range(len(states) +1)])
print(discounts)
for i in range(len(states)):
print(sum(rewards[i:] * discounts[:-(1+i)]))
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 50000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 50000/50000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode(env, nA, Q, epsilon):
# create empty episode
episode = []
# reset env
state = env.reset()
# we are not done, just starting
done = False
# play game until termination
while not done:
# choose action. explore or exploit, based on epsilon value
if state in Q and np.random.random() > epsilon:
# If we have observation for the state, oportunity to exploit
action = np.argmax(Q[state])
else:
# explore otherwise
action = np.random.randint(nA)
# take the action
next_state, reward, done, info = env.step(action)
# record S, A, R for (S, A)
episode.append((state, action, reward))
state = next_state
return episode
def generate_discounts(n, gamma):
    return np.array([gamma ** i for i in range(n + 1)])
def update_q_table(env, Q, episode, alpha, discounts):
# updates "every visit"
states, actions, rewards = zip(*episode)
for i in range(len(states)):
        # Calculate updated Q value: move the old estimate toward the observed return G
        G = sum(rewards[i:] * discounts[:-(1+i)])
        new_Q = Q[states[i]][actions[i]] + alpha * (G - Q[states[i]][actions[i]])
# Update Q table with new value
Q[states[i]][actions[i]] = new_Q
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, epsilon=1.0, eps_decay=0.99999, min_eps=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print(f"\rEpisode {i_episode}/{num_episodes}. Epsilon:{epsilon}","")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env, nA, Q, epsilon)
# calculate new epsilon value
epsilon = max(epsilon*eps_decay, min_eps)
# calculate discounts
discounts = generate_discounts(len(episode), gamma)
# update Q value using epsilon and discounts
Q = update_q_table(env, Q, episode, alpha, discounts)
        # update_q_table2 is not defined in the cells shown; see the sketch after this cell
        Q2 = update_q_table2(env, Q, episode, alpha, discounts)
return Q, Q2
###Output
_____no_output_____
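###Markdown
Note that `mc_control` above also calls `update_q_table2`, which is not defined in any of the cells shown, and the printout later in the notebook shows `Q` and `Q2` holding identical values, consistent with both helpers mutating and returning the same `defaultdict`. A plausible reconstruction, offered purely as an assumption, is a first-visit variant of the same constant-$\alpha$ update:
###Code
def update_q_table2(env, Q, episode, alpha, discounts):
    # first-visit variant (assumed): only the first occurrence of each (state, action) pair is updated
    states, actions, rewards = zip(*episode)
    visited = set()
    for i in range(len(states)):
        state_action = (states[i], actions[i])
        if state_action in visited:
            continue
        visited.add(state_action)
        G = sum(rewards[i:] * discounts[:-(1+i)])
        Q[states[i]][actions[i]] += alpha * (G - Q[states[i]][actions[i]])
    return Q
###Output
_____no_output_____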
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
Q, Q2 = mc_control(env, num_episodes=500000, alpha=0.02)
policy = dict((k, np.argmax(v)) for k, v in Q.items())
###Output
Episode 1000/500000. Epsilon:0.9900596848432421
Episode 2000/500000. Epsilon:0.9802083773701007
Episode 3000/500000. Epsilon:0.9704550925317743
[... several hundred similar progress lines omitted; epsilon decays exponentially and reaches its 0.05 floor at episode 300000 ...]
Episode 499000/500000. Epsilon:0.05
Episode 500000/500000. Epsilon:0.05
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
i = 0
for k, v in Q.items():
print(k, v)
print(Q2[k])
i=i+1
if i >5:
break
###Output
(14, 6, False) [-0.6272418 -0.70854277]
[-0.6272418 -0.70854277]
(7, 10, False) [-1.38852022 -1.09447148]
[-1.38852022 -1.09447148]
(13, 10, False) [-1.03495696 -1.37289326]
[-1.03495696 -1.37289326]
(14, 10, False) [-1.35861858 -1.0927732 ]
[-1.35861858 -1.0927732 ]
(5, 1, False) [-1.45592846 -0.88090486]
[-1.45592846 -0.88090486]
(15, 1, False) [-1.52094142 -1.14972375]
[-1.52094142 -1.14972375]
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(13, 2, False)
End game! Reward: 1.0
You won :)
(19, 2, False)
End game! Reward: -1.0
You lost :(
(12, 5, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v1')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(13, 2, True)
(20, 2, True)
(15, 2, False)
End game! Reward: -1.0
You lost :(
(12, 9, False)
End game! Reward: -1.0
You lost :(
(20, 10, False)
End game! Reward: 0.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((10, 10, False), 1, 0.0), ((14, 10, False), 1, 0.0), ((15, 10, False), 1, -1.0)]
[((13, 6, False), 1, -1.0)]
[((18, 10, False), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
        # generate an episode and unpack it before discounting
        episode = generate_episode(env)
        states, actions, rewards = zip(*episode)
        discounts = np.array([gamma**k for k in range(len(rewards)+1)])
        # accumulate the return observed from each state-action pair (every-visit MC)
        for i, state in enumerate(states):
            returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
            N[state][actions[i]] += 1
            Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
    return Q
Q = mc_prediction_q(env, 10, generate_episode_from_limit_stochastic)
print(Q)
###Output
defaultdict(<function mc_prediction_q.<locals>.<lambda> at 0x7ff4b8f8d310>, {(20, 10, False): array([1., 0.]), (20, 9, False): array([1., 0.]), (14, 5, False): array([0., 0.]), (15, 5, False): array([ 0., -1.]), (12, 4, False): array([ 0., -1.]), (14, 1, False): array([0., 0.]), (16, 1, False): array([0., 0.]), (21, 1, False): array([1., 0.]), (9, 10, False): array([1., 0.]), (17, 2, False): array([0., 0.]), (15, 9, False): array([ 0., -1.]), (17, 1, False): array([1., 0.]), (15, 10, False): array([ 0., -1.])})
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
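###Markdown
Note that the weights in `V_to_plot` mirror the stochastic policy used to generate the episodes: for sums above 18 the policy selects `STICK` with probability 0.8, and otherwise it selects `HIT` with probability 0.8, so `np.dot([0.8, 0.2], v)` and `np.dot([0.2, 0.8], v)` compute the policy-weighted state value $V_\pi(s) = \sum_a \pi(a|s)\, Q_\pi(s,a)$.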
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
    nA = env.action_space.n
    # initialize empty dictionary of arrays and the exploration rate
    Q = defaultdict(lambda: np.zeros(nA))
    epsilon = eps_start
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 1000 == 0:
            print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        # decay epsilon, then generate an episode with the epsilon-greedy policy
        epsilon = max(epsilon*eps_decay, eps_min)
        episode = []
        state = env.reset()
        while True:
            probs = np.ones(nA) * epsilon / nA
            probs[np.argmax(Q[state])] += 1 - epsilon
            action = np.random.choice(np.arange(nA), p=probs)
            next_state, reward, done, info = env.step(action)
            episode.append((state, action, reward))
            state = next_state
            if done:
                break
        # constant-alpha update toward the return observed from each state-action pair
        states, actions, rewards = zip(*episode)
        discounts = np.array([gamma**k for k in range(len(rewards)+1)])
        for i, state in enumerate(states):
            Q[state][actions[i]] += alpha * (sum(rewards[i:]*discounts[:-(1+i)]) - Q[state][actions[i]])
    policy = dict((k,np.argmax(v)) for k, v in Q.items())
    return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
!pip install gym
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
_____no_output_____
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(10):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(12, 10, False)
End game! Reward: -1.0
You lost :(
(15, 9, False)
(20, 9, False)
End game! Reward: 1.0
You won :)
(20, 5, False)
End game! Reward: -1
You lost :(
(8, 4, False)
(19, 4, True)
(14, 4, False)
End game! Reward: 1.0
You won :)
(19, 9, False)
(20, 9, False)
End game! Reward: 1.0
You won :)
(14, 6, False)
End game! Reward: -1.0
You lost :(
(18, 5, False)
End game! Reward: -1
You lost :(
(11, 7, False)
(21, 7, False)
End game! Reward: 1.0
You won :)
(5, 5, False)
End game! Reward: 1.0
You won :)
(15, 9, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(10):
print(generate_episode_from_limit_stochastic(env))
###Output
[((15, 3, False), 1, -1)]
[((13, 9, False), 1, -1)]
[((15, 9, False), 1, 0), ((18, 9, False), 1, -1)]
[((18, 5, True), 1, 0), ((12, 5, False), 1, 0), ((21, 5, False), 0, 1.0)]
[((21, 3, True), 0, 1.0)]
[((11, 4, False), 1, 0), ((18, 4, False), 1, 0), ((21, 4, False), 0, 1.0)]
[((9, 3, False), 1, 0), ((12, 3, False), 0, 1.0)]
[((14, 8, False), 0, 1.0)]
[((8, 2, False), 1, 0), ((19, 2, True), 0, 1.0)]
[((19, 6, False), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
        # track first visits to each state-action pair within the episode
        visited_states = set()
        episode = generate_episode(env)
        for i, (state, action, reward) in enumerate(episode):
            # discounted return observed from time step i onwards
            total_reward = reward
            for j in range(len(episode) - i - 1):
                total_reward += episode[j+i+1][2] * gamma ** (j+1)
            if (state, action) not in visited_states:
                returns_sum[state][action] += total_reward
                N[state][action] += 1
                visited_states.add((state, action))
                Q[state][action] = returns_sum[state][action] / N[state][action]
return Q
Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_policy(q_table):
policy = {}
for state in q_table.keys():
policy[state] = np.argmax(q_table[state])
return policy
def take_action(state, policy, epsilon):
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
if (state in policy) and np.random.random() > epsilon:
action = policy[state]
return action
def generate_episode(env, policy, epsilon):
episode = []
state = env.reset()
while True:
action = take_action(state, policy, epsilon)
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def mc_control(env, num_episodes, alpha, gamma=1.0, epsilon_min = 0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
epsilon = max(1/i_episode, epsilon_min)
policy = generate_policy(Q)
episode = generate_episode(env, policy, epsilon)
visited_states = set()
for i, (state, action, reward) in enumerate(episode):
total_reward = reward
for j in range(len(episode) - i - 1):
                total_reward += episode[j+i+1][2] * gamma ** (j+1)
            if (state, action) not in visited_states:
                visited_states.add((state, action))
                Q[state][action] += alpha * (total_reward - Q[state][action])
policy = generate_policy(Q)
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 1000000, 0.01)
###Output
Episode 1000000/1000000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(15, 3, False)
End game! Reward: 1.0
You won :)
(15, 10, False)
(19, 10, False)
End game! Reward: -1.0
You lost :(
(19, 1, False)
(21, 1, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((9, 10, False), 1, 0.0), ((14, 10, False), 1, 0.0), ((15, 10, False), 1, -1.0)]
[((12, 4, True), 0, -1.0)]
[((17, 8, False), 1, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
x = generate_episode_from_limit_stochastic(env)
s, a, r = zip(*x)
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(states)+1)])
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:] * discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]]/N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
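###Markdown
As a quick sanity check on the estimates (a small illustrative lookup using a hypothetical example state), a hand totalling 20 against a dealer 10 should have a much higher estimated value for `STICK` (index 0) than for `HIT` (index 1).
###Code
# Inspect the action-value estimates for one example state: (player sum, dealer card, usable ace).
print(Q[(20, 10, False)])   # expect the first entry (STICK) to be well above the second (HIT)
###Output
_____no_output_____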
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
episode = []
state = env.reset()
while True:
probs = np.array([epsilon/nA] * nA)
probs[np.argmax(Q[state])] += 1-epsilon
action = np.random.choice(np.arange(nA), p=probs)
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(states)+1)])
for i, state in enumerate(states):
        G = sum(rewards[i:] * discounts[:-(1+i)])
Q[state][actions[i]] += alpha * (G - Q[state][actions[i]])
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, epsilon=1.0, eps_decay=0.999999, eps_min=0.1):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
epsilon = max(epsilon*eps_decay, eps_min)
episode = generate_episode_from_Q(env, Q, epsilon, nA)
Q = update_Q(env, episode, Q, alpha, gamma)
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
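###Markdown
Before running it, a quick back-of-the-envelope check on the exploration schedule may be useful (an illustrative calculation using the default `eps_decay` and `eps_min` of the function above): with multiplicative decay, $\epsilon$ only reaches `eps_min` after roughly $\log(\epsilon_{\min}) / \log(d)$ episodes, where $d$ is the decay factor.
###Code
import numpy as np
# Illustrative calculation with the defaults from mc_control above (eps_decay=0.999999, eps_min=0.1).
eps_decay, eps_min = 0.999999, 0.1
episodes_to_min = np.log(eps_min) / np.log(eps_decay)   # episodes until epsilon first hits eps_min
eps_after_700k = eps_decay ** 700000                    # epsilon after the 700,000 episodes used below
print(int(episodes_to_min), round(eps_after_700k, 3))   # roughly 2.3 million, and epsilon is still about 0.497
###Output
_____no_output_____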
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 700000, 0.02, eps_min=0.01)
###Output
Episode 700000/700000.
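###Markdown
Before plotting, the returned `policy` dictionary can be inspected directly; the cell below is a small usage sketch that looks up the greedy action for one hypothetical example state.
###Code
# Look up the greedy action for an example state: (player sum 18, dealer shows 7, no usable ace).
sample_state = (18, 7, False)
if sample_state in policy:
    print('STICK' if policy[sample_state] == 0 else 'HIT')
else:
    print('state not visited during training')
###Output
_____no_output_____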
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
_____no_output_____
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
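###Markdown
Before filling in the function, it may help to look at the constant-$\alpha$ update in isolation: each estimate is nudged a fraction $\alpha$ of the way toward the return just observed, $Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\,(G_t - Q(S_t, A_t))$. The cell below is a tiny numeric illustration with made-up values.
###Code
# Tiny numeric illustration of one constant-alpha update step (values are illustrative only).
alpha = 0.02
Q_sa = 0.0                        # current estimate for some state-action pair
G = 1.0                           # return observed from that pair in the latest episode
Q_sa = Q_sa + alpha * (G - Q_sa)  # move a fraction alpha of the way toward the observed return
print(Q_sa)                       # 0.02
###Output
_____no_output_____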
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
_____no_output_____
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# generate an episode
episode = generate_episode(env)
# obtain the states, actions, and rewards
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
    nA = env.action_space.n
    # initialize empty dictionary of arrays and the exploration rate
    Q = defaultdict(lambda: np.zeros(nA))
    epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
    policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
print(["stick" if action == 0 else "hit"])
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(21, 8, True)
['stick']
End game! Reward: 1.0
You won :)
(12, 9, False)
['hit']
(18, 9, False)
['hit']
End game! Reward: -1
You lost :(
(13, 10, False)
['stick']
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
action_ = ["stick" if action == 0 else "hit"]
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((8, 8, False), 1, 0), ((16, 8, False), 1, 0), ((19, 8, False), 0, 1.0)]
[((14, 9, False), 1, 0), ((18, 9, False), 0, -1.0)]
[((15, 10, False), 1, 0), ((16, 10, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
print(Q)
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
        states, actions, rewards = zip(*episode)
        discounts = np.array([gamma**k for k in range(len(rewards)+1)])
        for i, state in enumerate(states):
            returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
            N[state][actions[i]] += 1.0
            Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
defaultdict(<function mc_prediction_q.<locals>.<lambda> at 0x7f2ddb6d1598>, {})
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env,generate_episode, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
N = defaultdict(lambda: np.zeros(nA))
returns_sum = defaultdict(lambda: np.zeros(nA))
policy = defaultdict()
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
        states, actions, rewards = zip(*episode)
        discounts = np.array([gamma**k for k in range(len(rewards)+1)])
        for i, state in enumerate(states):
            returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
            N[state][actions[i]] += 1.0
            Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
            policy[state] = np.argmax(Q[state])
return policy, Q
def get_probs(env, state, Q, epsilon):
n = env.action_space.n
argmax = np.argmax(Q[state])
probs = [epsilon/n if i != argmax else 1 - epsilon + epsilon/n for i in range(n)]
return probs
def generate_episode(env, Q, epsilon):
"""
    According to Q, generate an episode following an epsilon-greedy policy (exploration vs. exploitation).
Q: This is a dictionary (of one-dimensional arrays)
where Q[s][a] is the estimated action value corresponding to state s and action a.
"""
episode = []
state = env.reset()
while True:
# dynamic probability
probs = get_probs(env,state, Q, epsilon)
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def update_Q(Q, state, action, G, alpha):
    # constant-alpha update: nudge the current estimate toward the sampled return G
    old_Q = Q[state][action]
    Q[state][action] = old_Q + alpha*(G - old_Q)
    return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.9999,eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# generate episode
episode = generate_episode(env, Q, epsilon)
        # update Q toward the discounted return that follows each visited state-action pair
        states, actions, rewards = zip(*episode)
        discounts = np.array([gamma**i for i in range(len(rewards)+1)])
        for i, state in enumerate(states):
            G = sum(rewards[i:]*discounts[:-(1+i)])
            Q = update_Q(Q, state, actions[i], G, alpha)
# decaying epsilon.
if epsilon > eps_min:
epsilon *= eps_decay
else:
epsilon = eps_min
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.1)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
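# A small sketch (the variable names below are mine): one sampled observation unpacks into the three
# components described above (player sum, dealer's face-up card, usable-ace flag).
player_sum, dealer_card, usable_ace = env.reset()
assert 0 <= player_sum <= 31 and 1 <= dealer_card <= 10 and usable_ace in (True, False)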
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(5):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(11, 2, False)
End game! Reward: -1.0
You lost :(
(17, 3, False)
(19, 3, False)
End game! Reward: 1.0
You won :)
(17, 10, True)
End game! Reward: 0.0
You lost :(
(19, 10, True)
(21, 10, True)
(14, 10, False)
End game! Reward: 1.0
You won :)
(15, 6, False)
(18, 6, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
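# Illustration (sketch; the underscore-prefixed names are mine): an episode unpacks into parallel
# tuples of states, actions and rewards via zip(*episode), which the MC updates below rely on.
_episode = generate_episode_from_limit_stochastic(env)
_states, _actions, _rewards = zip(*_episode)
assert len(_states) == len(_actions) == len(_rewards) == len(_episode)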
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(15):
print(generate_episode_from_limit_stochastic(env))
###Output
[((16, 3, False), 1, -1)]
[((7, 10, False), 0, -1.0)]
[((12, 10, False), 0, -1.0)]
[((9, 3, False), 0, -1.0)]
[((20, 10, True), 1, 0), ((17, 10, False), 1, -1)]
[((21, 2, True), 0, 1.0)]
[((20, 8, True), 1, 0), ((20, 8, False), 0, 0.0)]
[((19, 8, False), 0, 1.0)]
[((15, 8, False), 0, -1.0)]
[((20, 5, False), 0, 1.0)]
[((12, 10, False), 1, 0), ((21, 10, False), 0, 1.0)]
[((16, 10, True), 1, 0), ((16, 10, False), 0, -1.0)]
[((15, 6, False), 1, -1)]
[((6, 6, False), 1, 0), ((11, 6, False), 0, 1.0)]
[((9, 9, False), 1, 0), ((19, 9, False), 0, 0.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## DONE: complete the function
# Generate an episode
episode = generate_episode(env)
# Get the variables from that episode
states, actions, rewards = zip(*episode)
# Calculate the discounts for the rewards
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# Generate the Q-Table
for idx, state in enumerate(states):
N[state][actions[idx]] += 1
returns_sum[state][actions[idx]] += sum(rewards[idx:]*discounts[:-(1+idx)])
Q[state][actions[idx]] = returns_sum[state][actions[idx]] / N[state][actions[idx]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def greedy_policy(actions, eps, nA):
# Identify the greedy action
greedy_action = np.argmax(actions)
# Initialize equiprobable policy
policy = np.ones(nA) * (eps / len(actions))
# Emphasizes the greedy action
policy[greedy_action] += 1 - eps
return policy
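# Quick check (illustrative numbers of my own): with eps = 0.2 and two actions the greedy action
# ends up with probability 1 - 0.2 + 0.2/2 = 0.9 and the other action with 0.2/2 = 0.1.
assert np.allclose(greedy_policy(np.array([0.1, 0.5]), 0.2, 2), [0.1, 0.9])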
def generate_episode_from_q(env, Q, epsilon, nA):
episode = []
state = env.reset()
while True:
try:
# Follow the greedy policy if Q[state] exists
actions = Q[state]
policy = greedy_policy(actions, epsilon, nA)
action = np.random.choice(np.arange(nA), p=policy)
except KeyError:
# Initialize Q[state]
Q[state] = np.zeros(nA)
# Follow the equiprobable policy
action = env.action_space.sample()
# Take a step
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def mc_control(env, num_episodes, alpha, gamma=1.0, eps=1.0, eps_decay=0.999, eps_min=0.1):
nA = env.action_space.n
# initialize empty dictionary
Q = defaultdict()
    # Define the starting value for epsilon (decayed at the end of each episode)
    epsilon = eps
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## DONE: complete the function
# Generate an episode
        episode = generate_episode_from_q(env, Q, epsilon, nA)
# Get the variables from that episode
states, actions, rewards = zip(*episode)
# Calculate the discounts for the rewards
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# Update Q-Table
for idx, state in enumerate(states):
Q[state][actions[idx]] += alpha * \
(sum(rewards[idx:]*discounts[:-(1+idx)]) - Q[state][actions[idx]])
        # Decay epsilon (but never below eps_min)
        epsilon = max(epsilon*eps_decay, eps_min)
# Calculate the optimal policy
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.01)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
print(f"Choosing action: {('stick', 'hit')[action]}")
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)') if reward > 0 else print('You lost :(')
            # This line only works in my custom modification of the gym's blackjack.py file
print(f"Done reason: {info['done_reason']}")
print(f"Dealer cards: {info['dealer_hand']}; sum: {sum(info['dealer_hand'])}")
print()
break
###Output
(17, 5, False)
Choosing action: stick
End game! Reward: -1.0
You lost :(
Done reason: stick
Dealer cards: [5, 3, 10]; sum: 18
(17, 7, False)
Choosing action: hit
End game! Reward: -1.0
You lost :(
Done reason: bust
Dealer cards: [7, 8]; sum: 15
(8, 8, False)
Choosing action: hit
(19, 8, True)
Choosing action: stick
End game! Reward: -1.0
You lost :(
Done reason: stick
Dealer cards: [8, 8, 4]; sum: 20
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((20, 5, False), 0, 1.0)]
[((8, 4, False), 1, 0.0), ((13, 4, False), 1, 0.0), ((16, 4, False), 0, 1.0)]
[((13, 9, True), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`. Pseudocode of implementation
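(A rough outline of the every-visit flow implemented below, written as my own summary rather than copied from the course: for each of `num_episodes` episodes, generate an episode with the supplied policy; compute the return that follows every step; add each return to `returns_sum[s][a]` and increment `N[s][a]`; finally set `Q[s][a] = returns_sum[s][a] / N[s][a]`.)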
###Code
def compute_returns(episode, gamma):
"""Computes the discounted return at every step of an episode.
This is a generic implementation. It could be simplified for blackjack, since all
rewards are 0 except for the last one."""
returns = []
for step in reversed(episode): # Reverse because the returns are computed accumulating
# discounted rewards starting from of the last state
state, action, reward = step
if len(returns) == 0:
_return = reward
else:
_return = reward + gamma * returns[0]
returns.insert(0, _return)
return returns
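# Small check (hypothetical two-step episode with gamma = 0.9; the states and values are made up):
# returns are accumulated backwards, so the first entry is r0 + 0.9*r1 and the last is just r1.
assert compute_returns([((12, 4, False), 1, 0.0), ((18, 4, False), 0, 1.0)], gamma=0.9) == [0.9, 1.0]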
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
"""The generate_episode function contains the implementation of the policy"""
# initialize empty dictionaries of arrays
# Note: Since there are 2 possible actions, env.action_space.n == 2 and
# np.zeros(env.action_space.n)) == [ 0. 0.]
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
returns = compute_returns(episode, gamma)
visited_states = set() # For generic implementation on first-visit. This is not
# really needed for BlackJack since in this game the same
# state can't happen more than once per episode
for step, _return in zip(episode, returns):
state, action, reward = step
if state not in visited_states:
visited_states.add(state)
N[state][action] += 1
returns_sum[state][action] += _return
    for state in N:
        for action in range(env.action_space.n):
            if N[state][action] > 0:   # guard against division by zero for never-taken actions
                Q[state][action] = returns_sum[state][action] / N[state][action]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._) Pseudocode of implementation
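(A rough outline of the constant-$\alpha$ control loop implemented below, again my own summary: for each episode, act with an $\epsilon$-greedy policy derived from the current `Q`; decay $\epsilon$ toward its minimum; compute the return that follows each step and nudge `Q[s][a]` toward it by a fraction $\alpha$; when all episodes are done, read off the greedy policy with `argmax`.)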
###Code
def get_greedy_policy(Q, nA):
"""This computes the greedy policy"""
policy = {}
for state in Q:
policy[state] = np.argmax(Q[state])
return policy
def get_epsilon_greedy_action(state, Q, epsilon=0.01):
"""This chooses an action using the epsilon-greedy policy"""
if state not in Q or np.random.random() < epsilon:
action = np.random.choice(np.arange(2)) # Random action
else:
action = np.argmax(Q[state]) # Same as the greedy policy above
return action
def generate_episode_from_limit_e_greedy(bj_env, Q, epsilon):
episode = []
state = bj_env.reset()
done = False
while not done:
action = get_epsilon_greedy_action(state, Q, epsilon)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
return episode
def mc_control(env, initial_Q=None, num_episodes=100000, alpha=0.05, gamma=1.0,
eps_start=1.0, eps_decay=.9995, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = initial_Q if initial_Q is not None else defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\b\b\b\b\b\b\rEpisode {}/{}. epsilon={}".format(
i_episode, num_episodes, round(epsilon, 6)), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode_from_limit_e_greedy(env, Q, epsilon)
epsilon = max(eps_decay * epsilon, eps_min)
returns = compute_returns(episode, gamma)
visited_states = set() # For generic implementation on first-visit. This is not
# really needed for BlackJack since in this game the same
# state can't happen more than once per episode
for step, _return in zip(episode, returns):
state, action, reward = step
if state not in visited_states:
visited_states.add(state)
Q[state][action] += alpha * (_return - Q[state][action])
policy = get_greedy_policy(Q, nA)
print(f"\nLast value of epsilon={epsilon}")
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# Help to find eps_decay
def get_eps_decay(num_episodes=1000000, eps_min=0.001):
# If we want to train 1/3 of the time with eps_min, then:
# eps_min = eps_decay ** (num_episodes * 2 / 3)
eps_decay = eps_min ** (1/(num_episodes * 2 / 3))
print(f"Best eps_decay = {eps_decay}")
get_eps_decay(num_episodes=1000000, eps_min=0.0001)
# obtain the estimated optimal policy and action-value function
#policy, Q = mc_control(env, ?, ?)
#policy, Q = mc_control(env, 10000000, alpha=0.001, gamma=1.0, eps_start=1.0, eps_decay=.9999992, eps_min=0.001)
# This produces almost the optimal policy...
#policy, Q = mc_control(env, 1000000000, alpha=0.0005)
# Trying to improve the almost-optimal policy with the following led to a worse policy
policy, Q = mc_control(env, initial_Q=Q, num_episodes=1000000, alpha=0.0001,
gamma=1.0, eps_start=1.0, eps_decay=.9999861845, eps_min=0.0001)
###Output
Episode 1000000/1000000. epsilon=0.0001
Last value of epsilon=0.0001
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
!pip install gym
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(17, 10, False)
(20, 10, False)
End game! Reward: 1.0
You won :)
(19, 7, False)
End game! Reward: 1.0
You won :)
(19, 8, False)
(21, 8, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((20, 1, False), 0, -1.0)]
[((21, 10, True), 0, 1.0)]
[((18, 10, False), 1, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episode = generate_episode(env)
# my version was very wrong, mostly because of issues with transforming the data structures,
# this is the solution recommended by the course, with annotations
# obtain the states, actions, and rewards
# first we zip all the elements of the tuples together in a structure
# then we unpack them into their variables
states, actions, rewards = zip(*episode)
# prepare for discounting
        # in this exercise, gamma is 1 so this is just for demonstration, but this MC prediction algorithm would still work
        # if we were discounting
        # so I will use an incrementing integer as the exponent that gamma is raised to at each time step. cool.
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states):
""" This is where I was most confused before. besides zipping the tuples first, this part where you use the index of the
episode to determine which action to look up instead of trying to unpack the states and actions both from episodes.
For some reason I thought that the episodes was some kind of 2D dict and I got stuck there, this way, with them all
separate and the index tying them together works better. """
            # note to self: discounts[:-(1+i)] keeps the first len(rewards)-i factors, i.e. gamma^0 ... gamma^(T-1-i),
            # which line up with the rewards from the current timestep i onward.
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
            # note: this division could be done once, outside the episode loop, for fewer calculations;
            # keeping it here just means Q is always up to date while training.
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
#Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
Q = mc_prediction_q(env, 5000000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 5000000/5000000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode(bj_env, policy):
episode = []
state = bj_env.reset()
while True:
action = policy(state)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
""" probability step funciton, tell me whether I'm greedy or not. num actions is hardcoded"""
def is_greedy(epsilon):
num_actions = 2
    # as mentioned in lecture: return 1 (act greedily) with probability 1 - epsilon + epsilon/num_actions,
    # and 0 (explore) with probability epsilon/num_actions
chance_of_greedy = [epsilon / num_actions, 1 - epsilon + epsilon / num_actions]
return np.random.choice(np.arange(2), p=chance_of_greedy)
""" takes in the action value function Q and epsilon and returns an epsilon greedy policy based upon it"""
def e_greedy(Q, epsilon):
def check(state):
if is_greedy(epsilon):
return np.where(Q[state] == np.amax(Q[state]))[0][0]
else:
return np.random.choice(np.arange(2))
return check
""" give me a Q table with action values for actions in each state, I'll give you the greedy state-action mapping (policy) """
def greedy(Q):
policy = {}
for state in Q.keys():
policy[state] = np.where(Q[state] == np.amax(Q[state]))[0][0]
return policy
"""
# GLIE epsilon decay. so we are trying to solve the problem of maintaining exploration
# if we don't eventually visit all the states we won't know if we have an optimal policy
# options are "exploring starts" which won't work for blackjack, because the environment randomly picks start
# states and some states aren't available until later in an episode, and
# a stochastic policy that has some probability of not being greedy, epsilon-greedy is what we were taught.
# I'm choosing the suggested epsilon, start at epsilon = 1 which is equiprobable random, then slowly shrink
# and plateau at 0.1 so we still explore enough
"""
def update_epsilon(i_episode):
    # unused alternative: GLIE-style 1/i_episode decay, clamped at 0.1 (takes the episode index
    # as an argument so it does not rely on a global variable)
    if 1/i_episode > 0.1:
        return 1/i_episode
    else:
        return 0.1
def update_epsilon2(epsilon):
if epsilon <= 0.1:
return 0.1
else:
return epsilon * 0.99999
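# Illustration (sketch; _eps_demo is my own name): update_epsilon2 shrinks epsilon multiplicatively
# on each call and clamps it at 0.1, so some exploration always remains.
_eps_demo = 1.0
for _ in range(100):
    _eps_demo = update_epsilon2(_eps_demo)
assert 0.1 <= _eps_demo < 1.0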
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = 1
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# GLIE epsilon decay, but more practical with the 0.1 clamp
# because if epsilon gets too small we won't explore enough
epsilon = update_epsilon2(epsilon)
# ok, should be possible here to start with equiprobable random policy for blackjack, and then
# each iteration after we evaluate we use this for policy improvement. So I create a policy
# here, but it should be stochastic in what action it chooses.
current_policy = e_greedy(Q, epsilon)
episode = generate_episode(env, current_policy)
        # loop through the timesteps
        # first-visit and every-visit MC are equivalent for the Blackjack environment
        # update each state-action entry in the Q table to be its current value plus
        # the constant alpha times the difference between the sampled return and the current estimate
        states, actions, rewards = zip(*episode)
        discounts = np.array([gamma**i for i in range(len(rewards)+1)])
        for time_step, state in enumerate(states):
            # using constant alpha here instead of 1/(number of visits), so that, in contrast to a step size
            # that decays with the visit count, later episodes can still meaningfully update the estimates
            G = sum(rewards[time_step:]*discounts[:-(1+time_step)])
            Q[state][actions[time_step]] += alpha * (G - Q[state][actions[time_step]])
return greedy(Q), Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 600000, .02)
###Output
Episode 600000/600000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from sklearn.preprocessing import normalize
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(20, 6, False)
End game! Reward: 0.0
You lost :(
(20, 10, True)
(17, 10, False)
(18, 10, False)
End game! Reward: -1.0
You lost :(
(20, 10, False)
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((14, 1, False), 0, -1.0)]
[((12, 2, True), 0, -1.0)]
[((11, 10, False), 1, 0), ((18, 10, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episode = generate_episode(env)
epN = len(episode)
states, actions, rewards = zip(*episode)
discounted_reward = 0
for i in reversed(range(epN)):
discounted_reward = rewards[i] + gamma*discounted_reward
N[states[i]][actions[i]] += 1
Q[states[i]][actions[i]] += discounted_reward
for state, actions in N.items():
for action_i in range(env.action_space.n):
if N[state][action_i] != 0:
Q[state][action_i] /= N[state][action_i]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_epsilon(bj_env, Q, epsilon=0.1):
nA = env.action_space.n
episode = []
state = bj_env.reset()
while True:
preferred_action = np.argmax(Q[state])
discriminative_actions = (1-np.absolute(Q[state])) / sum(1-np.absolute(Q[state]))
action = np.random.choice(np.append(preferred_action, np.arange(nA)),
p=np.append(1.0-epsilon, epsilon*discriminative_actions))
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
epsilon = eps_start
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}, epsilon={}.".format(i_episode, num_episodes, epsilon), end="")
sys.stdout.flush()
episode = generate_episode_epsilon(env, Q, epsilon)
epsilon = max(epsilon*eps_decay, eps_min)
epN = len(episode)
states, actions, rewards = zip(*episode)
discounted_reward = 0
for i in reversed(range(epN)):
discounted_reward = rewards[i] + gamma*discounted_reward
#N[states[i]][actions[i]] += 1
Q[states[i]][actions[i]] = (1-alpha)*Q[states[i]][actions[i]] + alpha*discounted_reward
policy = defaultdict(lambda: 0)
for state, values in Q.items(): policy[state] = np.argmax(values)
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 2000000, 0.02)
###Output
Episode 2000000/2000000, epsilon=0.05.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
2
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(16, 7, False)
End game! Reward: 1.0
You won :)
(15, 10, False)
(21, 10, False)
End game! Reward: -1.0
You lost :(
(16, 7, False)
(20, 7, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((20, 8, True), 1, 0.0), ((13, 8, False), 1, 0.0), ((15, 8, False), 0, -1.0)]
[((20, 8, True), 0, 1.0)]
[((20, 10, False), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
        episode = generate_episode(env)  # use the policy function passed in via the generate_episode argument
states,actions,rewards = zip(*episode)
discounts = np.array([gamma ** i for i in range(len(rewards)+1)])
for i,state in enumerate(states):
N[state][actions[i]] += 1
returns_sum[state][actions[i]] += sum(rewards[i:] * discounts[:-(1+i)])
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_with_Q(env,Q,epsilon,numActions):
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(numActions),p=get_probabilities(Q[state],epsilon,numActions))
next_state, reward, done, info = env.step(action)
episode.append((state,action,reward))
state = next_state
if done:
break
return episode
def get_probabilities(Q,epsilon,numActions):
pi_as = np.ones(numActions) * epsilon/numActions
best_action = np.argmax(Q)
pi_as[best_action] += 1-epsilon
return pi_as
def update_Q(env,episode,Q,alpha,discount_rate):
states,actions,rewards = zip(*episode)
discounts = np.array([discount_rate**i for i in range(len(rewards)+1)])
for i,state in enumerate(states):
temp_Q = Q[state][actions[i]]
sample_return_Gt = sum(rewards[i:] * discounts[:-(1+i)])
Q[state][actions[i]] = temp_Q + alpha * (sample_return_Gt - temp_Q)
return Q
def mc_control(env, num_episodes, alpha, discount_rate=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
min_epsilon = 0.05
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
epsilon = max(min_epsilon,1/i_episode)
episode = generate_episode_with_Q(env,Q,epsilon,nA)
Q = update_Q(env,episode,Q,alpha,discount_rate)
policy = dict((k,np.argmax(v)) for k,v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.1)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
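###Markdown
As a quick sketch of reading one observation, the 3-tuple can be unpacked directly; the variable names below are only illustrative, not part of the environment's API.
###Code
# sketch: unpack a single observation into its three components (names are illustrative)
state = env.reset()
player_sum, dealer_card, usable_ace = state
print('player sum :', player_sum)
print('dealer card:', dealer_card)
print('usable ace :', usable_ace)
###Output
_____no_output_____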
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(19, 3, False)
End game! Reward: -1.0
You lost :(
(5, 10, False)
(15, 10, False)
End game! Reward: 1.0
You won :)
(7, 7, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
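###Markdown
The prediction and control cells below separate an episode into its state, action and reward sequences with `zip(*episode)`; here is a short sketch of that idiom on one freshly sampled episode.
###Code
# sketch: split one sampled episode into state, action and reward sequences,
# the same zip(*episode) idiom used by the cells that follow
episode = generate_episode_from_limit_stochastic(env)
states, actions, rewards = zip(*episode)
print('states :', states)
print('actions:', actions)
print('rewards:', rewards)
###Output
_____no_output_____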
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(20):
print(generate_episode_from_limit_stochastic(env))
###Output
[((9, 7, False), 1, 0), ((12, 7, False), 1, 0), ((14, 7, False), 1, 0), ((20, 7, False), 0, 1.0)]
[((15, 3, False), 1, -1)]
[((9, 10, False), 0, -1.0)]
[((19, 7, False), 0, -1.0)]
[((20, 7, False), 0, 1.0)]
[((18, 10, False), 1, -1)]
[((12, 3, False), 1, 0), ((13, 3, False), 1, 0), ((15, 3, False), 1, -1)]
[((17, 8, False), 1, -1)]
[((12, 10, False), 1, 0), ((17, 10, False), 1, 0), ((20, 10, False), 0, 0.0)]
[((8, 8, False), 0, -1.0)]
[((14, 2, False), 1, -1)]
[((20, 8, False), 0, 1.0)]
[((12, 7, True), 1, 0), ((12, 7, False), 1, 0), ((16, 7, False), 1, -1)]
[((13, 8, False), 0, -1.0)]
[((17, 8, True), 1, 0), ((12, 8, False), 1, 0), ((19, 8, False), 1, -1)]
[((16, 8, False), 1, -1)]
[((5, 4, False), 1, 0), ((12, 4, False), 1, 0), ((16, 4, False), 1, -1)]
[((17, 1, False), 1, 0), ((18, 1, False), 1, -1)]
[((15, 2, False), 0, 1.0)]
[((20, 8, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
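###Markdown
If you do implement the first-visit variant, the only extra bookkeeping is a check that keeps just the first occurrence of each (state, action) pair; the cell below is a small sketch of that check on one sampled episode, reusing the episode generator defined above.
###Code
# sketch: time steps kept by first-visit MC (only the first occurrence of each
# (state, action) pair in an episode contributes to its average)
episode = generate_episode_from_limit_stochastic(env)
seen = set()
first_visit_steps = []
for i, (s, a, r) in enumerate(episode):
    if (s, a) not in seen:
        seen.add((s, a))
        first_visit_steps.append(i)
print('time steps used by first-visit MC:', first_visit_steps)
###Output
_____no_output_____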
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards) + 1)])
# perform every-visit MC prediction
for i in range(len(states)):
returns_sum[states[i]][actions[i]] += sum(rewards[i:] * discounts[:-(1+i)])
N[states[i]][actions[i]] += 1.
Q[states[i]][actions[i]] = returns_sum[states[i]][actions[i]] / N[states[i]][actions[i]]
return Q
###Output
_____no_output_____
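###Markdown
The slice `discounts[:-(1+i)]` lines the rewards from step $i$ onward up with the right powers of $\gamma$, so the inner sum is the return $G_i = \sum_{k=0}^{T-i-1} \gamma^k R_{i+k+1}$. The cell below is a self-contained check of that alignment on made-up rewards.
###Code
import numpy as np
# sketch: verify that rewards[i:] * discounts[:-(1+i)] pairs each reward
# with the correct power of gamma (the rewards below are made up)
gamma = 0.9
rewards = [0.0, 0.0, 1.0]
discounts = np.array([gamma**i for i in range(len(rewards) + 1)])
for i in range(len(rewards)):
    G_i = sum(rewards[i:] * discounts[:-(1 + i)])
    print('return from step', i, '=', round(G_i, 4))
###Output
_____no_output_____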
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
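###Markdown
The comprehension above converts $Q$ into a state-value function by weighting the two action values with the policy's action probabilities, $V_\pi(s) = \sum_a \pi(a|s)\, Q_\pi(s,a)$. Below is a one-line sketch of that dot product on hypothetical action values.
###Code
import numpy as np
# sketch: the state value under the stochastic policy is the policy-weighted
# average of the two action values (the numbers below are hypothetical)
q_s = np.array([0.4, -0.2]) # Q(s, STICK), Q(s, HIT) for some state s
print(np.dot([0.8, 0.2], q_s)) # weighting used when the player's sum exceeds 18
print(np.dot([0.2, 0.8], q_s)) # weighting used when the sum is 18 or below
###Output
_____no_output_____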
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def get_action_probabilities(Q_s, epsilon, number_of_actions):
    policy_s = np.ones(number_of_actions) * epsilon / number_of_actions # initialize every action's probability with epsilon / N
    best_action_index = np.argmax(Q_s) # get the action with the highest estimated value
    policy_s[best_action_index] = 1 - epsilon + (epsilon / number_of_actions) # give the greedy action the largest probability
return policy_s
def generate_episode_from_Q(env, Q, epsilon, number_of_actions):
episode = []
state = env.reset()
while True:
        if state in Q:
            # check membership before touching Q[state]: indexing the defaultdict
            # first would insert the key and make this branch always taken
            policy = get_action_probabilities(Q[state], epsilon, number_of_actions)
            action = np.random.choice(np.arange(number_of_actions), p=policy)
        else:
            action = env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def update_Q(episode, Q, alpha, gamma):
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards) + 1)])
for i in range(len(states)):
Q_s_a = Q[states[i]][actions[i]]
Q[states[i]][actions[i]] += alpha * (sum(rewards[i:] * discounts[:-(1+i)]) - Q_s_a)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
epsilon = max(epsilon * eps_decay, eps_min)
episode = generate_episode_from_Q(env, Q, epsilon, nA)
Q = update_Q(episode, Q, alpha, gamma)
        policy = dict((state, np.argmax(action_values)) for state, action_values in Q.items())
return policy, Q
###Output
_____no_output_____
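###Markdown
This version anneals $\epsilon$ multiplicatively each episode, with a floor at `eps_min`. The sketch below uses the closed form `eps_start * eps_decay**t` (equivalent to applying the decay once per episode) to show how quickly the schedule reaches its floor.
###Code
# sketch: value of epsilon after t episodes under the multiplicative schedule above
eps_start, eps_decay, eps_min = 1.0, 0.99999, 0.05
for t in [1, 10000, 100000, 300000, 500000]:
    print(t, round(max(eps_min, eps_start * eps_decay**t), 4))
###Output
_____no_output_____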
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(12, 5, False)
End game! Reward: 1.0
You won :)
(16, 7, False)
End game! Reward: -1.0
You lost :(
(12, 5, False)
(16, 5, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((19, 2, True), 0, 1.0)]
[((17, 10, False), 1, -1)]
[((10, 9, False), 1, 0), ((17, 9, False), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
states, actions, rewards = zip(*generate_episode(env))
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i in range(len(rewards)):
N[states[i]][actions[i]] += 1
returns_sum[states[i]][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
Q[states[i]][actions[i]] = returns_sum[states[i]][actions[i]] / N[states[i]][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def get_epsilon_policy(epsilon, Qs, nA):
policy = np.ones(nA) * epsilon / nA
astar = np.argmax(Qs)
policy[astar] = 1 - epsilon + (epsilon / nA)
return policy
def generate_episode_from_epsilon_greedy_policy(bj_env, Q, epsilon, nA):
episode = []
state = bj_env.reset()
while True:
action = np.random.choice(nA,p=get_epsilon_policy(epsilon,Q[state],nA)) if state in Q else bj_env.action_space.sample()
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
epsilon = max(epsilon * eps_decay, eps_min)
        states, actions, rewards = zip(*generate_episode_from_epsilon_greedy_policy(env, Q, epsilon, nA))
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i in range(len(rewards)):
Q[states[i]][actions[i]] += alpha * (sum(rewards[i:]*discounts[:-(1+i)]) - Q[states[i]][actions[i]])
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
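###Markdown
As a quick sanity check on a hypothetical pair of action values, the probabilities returned by `get_epsilon_policy` always sum to one and shift mass onto the greedy action as $\epsilon$ shrinks.
###Code
import numpy as np
# sketch: epsilon-greedy probabilities for a hypothetical pair of action values
q_values = np.array([0.1, 0.5])
for eps in [1.0, 0.5, 0.05]:
    probs = get_epsilon_policy(eps, q_values, 2)
    print('epsilon =', eps, '->', probs, 'sum =', probs.sum())
###Output
_____no_output_____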
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(19, 8, False)
End game! Reward: -1
You lost :(
(18, 4, False)
End game! Reward: 1.0
You won :)
(18, 2, True)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((14, 4, False), 1, -1)]
[((16, 4, False), 0, -1.0)]
[((11, 6, False), 1, 0), ((21, 6, False), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## List (state, action, reward)
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
# Discounts
discounts = np.array([gamma ** idx for idx in range(len(states) + 1)])
for idx, state in enumerate(states):
returns_sum[state][actions[idx]] += sum(rewards[idx:] * discounts[:-(1 + idx)])
N[state][actions[idx]] += 1
Q[state][actions[idx]] = returns_sum[state][actions[idx]] / N[state][actions[idx]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
policy_s = epsilon / nA * np.ones(nA)
policy_s[np.argmax(Q_s)] += 1 - epsilon
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for idx, state in enumerate(states):
Q[state][actions[idx]] += alpha * (sum(rewards[idx:]*discounts[:-(1+idx)]) - Q[state][actions[idx]])
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
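###Markdown
The last line of `mc_control` reads the greedy policy off the action-value estimates with `np.argmax`; here is a tiny sketch of that step on two hypothetical states.
###Code
import numpy as np
# sketch: extracting the greedy policy from Q for two hypothetical states
Q_example = {(20, 10, False): np.array([0.44, -0.85]), # sticking looks better here
             (13, 2, False): np.array([-0.30, -0.10])} # hitting looks better here
policy_example = dict((s, np.argmax(v)) for s, v in Q_example.items())
print(policy_example) # 0 = STICK, 1 = HIT
###Output
_____no_output_____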
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.2)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(12, 10, False)
End game! Reward: -1.0
You lost :(
(16, 10, True)
(17, 10, True)
(21, 10, True)
End game! Reward: 1.0
You won :)
(19, 7, False)
(20, 7, False)
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((9, 4, False), 1, 0), ((16, 4, False), 0, 1.0)]
[((20, 3, False), 1, -1)]
[((12, 5, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# generate an episode
episode = generate_episode(env)
# obtain the states, actions, and rewards
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 1000000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
len(Q)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
    policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def generate_episode_from_Q(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p = get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon * eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
import random
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
print ((env.observation_space.shape))
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
None
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(11, 9, False)
End game! Reward: -1.0
You lost :(
(16, 10, False)
End game! Reward: -1.0
You lost :(
(15, 8, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((9, 9, False), 1, 0.0), ((19, 9, False), 0, -1.0)]
[((11, 10, False), 1, 0.0), ((18, 10, False), 0, -1.0)]
[((16, 5, False), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
epi_list = generate_episode(env)
# print (epi_list)
        for i, (s, a, r) in enumerate(epi_list):
            N[s][a] += 1
            # discounted return from time step i to the end of the episode
            G = sum(epi_list[j][2] * gamma**(j - i) for j in range(i, len(epi_list)))
            returns_sum[s][a] += G
    # average the accumulated returns over the visit counts (visited pairs only)
    for state in N.keys():
        for action in range(env.action_space.n):
            if N[state][action] > 0:
                Q[state][action] = returns_sum[state][action] / N[state][action]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 50000, generate_episode_from_limit_stochastic)
#print (Q[6])
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 50000/50000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
# added by saeid
def generate_episode_eps_greedy(bj_env,eps,Q):
'''
episode = []
state = bj_env.reset()
# print ('eps: ', eps)
while True:
#probs = [eps, 1-eps]
if random.uniform(0, 1) < eps:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
#action = np.random.choice(np.arange(2))
else:
action = np.argmax(Q[state])
# print ('action: ', action)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
'''
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = bj_env.reset()
nA = bj_env.action_space.n
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], eps, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
policy = {}
G = {}
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
        # linearly decay epsilon from 0.9 on the first episode down to about 0.1 on the last
        eps = 0.9 - 0.8 * (i_episode - 1) / num_episodes
epi_list = generate_episode_eps_greedy(env,eps,Q)
# print (epi_list)
        for i, (s, a, r) in enumerate(epi_list):
            # discounted return from time step i onward, keyed by the (state, action)
            # pair so that repeated visits within an episode do not overwrite each other
            G[(s, a)] = sum(epi_list[j][2] * gamma**(j - i) for j in range(i, len(epi_list)))
        for state, action, _ in epi_list:
            # constant-alpha update toward the sampled return
            Q[state][action] = (1 - alpha) * Q[state][action] + alpha * G[(state, action)]
            policy[state] = np.argmax(Q[state]) # np.max was wrong
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v1')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(16, 8, False)
End game! Reward: 1.0
You won :)
(19, 5, False)
End game! Reward: 1.0
You won :)
(8, 8, False)
(18, 8, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(6):
print(generate_episode_from_limit_stochastic(env))
###Output
[((17, 6, False), 1, -1.0)]
[((15, 4, False), 1, -1.0)]
[((18, 8, False), 1, -1.0)]
[((17, 7, False), 1, 0.0), ((18, 7, False), 1, -1.0)]
[((15, 9, False), 1, -1.0)]
[((14, 5, False), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
episode_len = len(states)
discounts = np.array([gamma**i for i in range(episode_len + 1)])
visited = defaultdict(lambda: np.zeros(env.action_space.n))
for i, state_t in enumerate(states):
action_t = actions[i]
            # first-visit MC: only the first occurrence of a (state, action) pair counts
            if not visited[state_t][action_t]:
                visited[state_t][action_t] = 1
                N[state_t][action_t] += 1
                returns_sum[state_t][action_t] += sum(rewards[i:] * discounts[:episode_len - i])
                Q[state_t][action_t] = returns_sum[state_t][action_t] / N[state_t][action_t]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
from enum import IntEnum
class EPS_ACTION(IntEnum):
GREEDY = 0
NON_GREEDY = 1
len(EPS_ACTION)
def get_policy_action(Q, policy, state, epsilon):
Q_state = Q[state]
action =np.random.choice([
int(EPS_ACTION.GREEDY),
int(EPS_ACTION.NON_GREEDY)],
p=[1-epsilon, epsilon])
if action == EPS_ACTION.GREEDY:
policy[state] = np.argmax(Q_state)
else:
policy[state] = np.random.choice(len(Q_state))
return policy[state]
def generate_episode_from_policy(bj_env, Q, policy, epsilon):
episode = []
state = bj_env.reset()
while True:
action = get_policy_action(Q, policy, state, epsilon) if state in Q else bj_env.action_space.sample()
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_decay=0.99999, min_eps_value=0.1):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
policy = defaultdict(lambda: np.random.choice(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
epsilon = max(1/i_episode, min_eps_value)
episode = generate_episode_from_policy(env, Q, policy, epsilon)
states, actions, rewards = zip(*episode)
episode_len = len(states)
discounts = np.array([gamma**i for i in range(episode_len + 1)])
# first visit MC
visited = defaultdict(lambda: np.zeros(env.action_space.n))
for i, state_t in enumerate(states):
action_t = actions[i]
if not visited[state_t][action_t]:
visited[state_t][action_t] = 1
reward_t = sum(rewards[i:] * discounts[:episode_len - i])
Q[state_t][action_t] += alpha*(reward_t - Q[state_t][action_t])
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 1000000, 0.02, gamma=0)
###Output
Episode 1000000/1000000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
#from math import sqrt
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
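If it helps to read sampled observations, a tiny helper along the following lines can unpack the 3-tuple (purely illustrative; `describe_state` is not part of the environment or of this notebook):

```python
def describe_state(state):
    """Illustrative only: turn a Blackjack observation into a readable string."""
    player_sum, dealer_card, usable_ace = state
    return "player sum={}, dealer shows={}, usable ace={}".format(
        player_sum, dealer_card, bool(usable_ace))

# describe_state((14, 10, False)) -> 'player sum=14, dealer shows=10, usable ace=False'
```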
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action) # FA: info is empty by default. See source: "def step(self, action):"
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(14, 3, False)
(20, 3, False)
(21, 3, False)
End game! Reward: 1.0
You won :)
(21, 10, True)
End game! Reward: 1.0
You won :)
(12, 5, False)
(20, 5, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
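Because every implementation in this notebook relies on that layout, here is a minimal sketch (illustrative only; `episode_return` is not required by the exercise) of how an episode can be unpacked and reduced to the discounted return from its first time step:

```python
import numpy as np

def episode_return(episode, gamma=1.0):
    """Illustrative only: discounted return G_0 of a list of (state, action, reward) tuples."""
    states, actions, rewards = zip(*episode)                      # regroup the tuples column-wise
    discounts = np.array([gamma**k for k in range(len(rewards))])
    return float(np.sum(discounts * np.array(rewards)))
```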
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
testepi = {}
for i in range(30):
testepi[i] = generate_episode_from_limit_stochastic(env)
print(i,':',testepi[i])
###Output
0 : [((20, 7, False), 0, 1.0)]
1 : [((9, 7, False), 1, 0.0), ((17, 7, False), 1, -1.0)]
2 : [((10, 10, False), 0, 1.0)]
3 : [((12, 10, False), 1, 0.0), ((16, 10, False), 1, -1.0)]
4 : [((20, 3, False), 0, 1.0)]
5 : [((12, 10, False), 1, 0.0), ((15, 10, False), 0, -1.0)]
6 : [((18, 10, False), 1, -1.0)]
7 : [((16, 10, False), 1, -1.0)]
8 : [((21, 8, True), 0, 1.0)]
9 : [((12, 1, False), 0, -1.0)]
10 : [((12, 4, False), 1, 0.0), ((15, 4, False), 0, -1.0)]
11 : [((21, 10, True), 0, 1.0)]
12 : [((14, 1, False), 1, 0.0), ((18, 1, False), 1, 0.0), ((20, 1, False), 0, 1.0)]
13 : [((21, 10, True), 1, 0.0), ((16, 10, False), 1, 0.0), ((18, 10, False), 1, -1.0)]
14 : [((12, 3, False), 1, 0.0), ((16, 3, False), 0, -1.0)]
15 : [((20, 10, False), 1, -1.0)]
16 : [((20, 10, False), 1, 0.0), ((21, 10, False), 1, -1.0)]
17 : [((15, 10, False), 1, -1.0)]
18 : [((15, 7, False), 1, -1.0)]
19 : [((15, 4, True), 1, 0.0), ((15, 4, False), 0, 1.0)]
20 : [((14, 7, False), 1, 0.0), ((17, 7, False), 1, -1.0)]
21 : [((13, 10, True), 0, -1.0)]
22 : [((14, 10, False), 0, 1.0)]
23 : [((12, 5, False), 1, 0.0), ((14, 5, False), 0, 1.0)]
24 : [((17, 10, False), 1, -1.0)]
25 : [((20, 10, False), 0, -1.0)]
26 : [((12, 8, False), 1, 0.0), ((18, 8, False), 0, 0.0)]
27 : [((6, 10, False), 1, 0.0), ((16, 10, False), 0, -1.0)]
28 : [((12, 2, False), 0, -1.0)]
29 : [((20, 7, False), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
#DEBUG DEFINITIONS BEGIN
#num_episodes = 10
#generate_episode = generate_episode_from_limit_stochastic
#gamma=1.0
#DEBUG DEFINITIONS END
#while True: #DEBUG
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# generate an episode
episode = generate_episode(env)
# obtain the states, actions, and rewards
states = []
actions = []
rewards = []
for i,a in enumerate(episode):
states.append(episode[i][0])
actions.append(episode[i][1])
rewards.append(episode[i][2])
## START: Copied from solution
# prepare for discounting
        # when gamma == 1.0 the discount weights are all ones, so the power computation can be skipped
if gamma == 1.0:
discounts = np.ones(len(rewards)+1)
else:
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
## STOP: Copied from solution
#break #DEBUG
#
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
#print(Q, '\n') #DEBUG
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
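The helper functions below build an $\epsilon$-greedy policy from the current action-value estimates; the action probabilities they construct are

$$\pi(a \mid s) = \begin{cases} 1 - \epsilon + \frac{\epsilon}{|\mathcal{A}|} & \text{if } a = \arg\max_{a'} Q(s, a'), \\ \frac{\epsilon}{|\mathcal{A}|} & \text{otherwise,} \end{cases}$$

where $|\mathcal{A}|$ is the number of available actions (two in Blackjack).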
###Code
def calc_epsilon_i(epsilon, i, epsilon_min = 0.05):
""" Returns a diminishing epsilon_i based on the given time step i with a lower limit of 0.01"""
return max( (epsilon / ((i+1)/10)), epsilon_min ) #Minimal epsilon is 0.05
def get_probs(Q_s, epsilon, nA):
""" epsilon greedy probabilities """
policy_s = np.ones(nA) * epsilon / nA
# DEBUG! print(policy_s)
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
# DEBUG! print(policy_s)
return policy_s
#for k,v in Q.items():
# print(k, '-->', v, get_probs(v, 0.9, env.action_space.n))
def generate_episode_from_Q(env, Q, epsilon, nA):
""" Generates and episode from the Q function"""
episode = []
state = env.reset()
while True:
if state in Q:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA))
else:
action = env.action_space.sample()
next_state, reward, done, _ = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def update_Q(env, episode, Q, alpha, gamma):
""" as the name sais: update Q """
states, actions, rewards = zip(*episode)
# discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma = 1.0, eps_start = 1.0):
# initialize
nA = env.action_space.n
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
#calc_epsilon_i here!!!
epsilon_i = calc_epsilon_i(eps_start, i_episode)
#episode = generate_episode_from_limit_stochastic(env)
        episode = generate_episode_from_Q(env, Q, epsilon_i, nA)
Q = update_Q(env, episode, Q, alpha, gamma)
policy = dict((k,np.argmax(v)) for k,v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.1)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(12, 6, False)
End game! Reward: 1.0
You won :)
(13, 4, False)
End game! Reward: -1.0
You lost :(
(20, 3, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((10, 8, False), 1, 0.0), ((15, 8, False), 1, 0.0), ((17, 8, False), 1, -1.0)]
[((14, 3, False), 0, -1.0)]
[((7, 8, False), 1, 0.0), ((16, 8, False), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
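The implementation below computes returns with the usual backward recursion over the episode,

$$G_T = 0, \qquad G_t = R_{t+1} + \gamma \, G_{t+1} \quad \text{for } t = T-1, \dots, 0,$$

which is what the reversed loop over the `G` array expresses.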
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
# compute discounted return
G = np.zeros(len(episode)+1) # one extra for terminal state return which is always zero
for i in reversed(range(len(G[:-1]))):
_, _, r = episode[i]
G[i] = r + gamma * G[i+1]
# accumulate visitation counts and return sum
for i,step in enumerate(episode):
s, a, r = step
            # if N[s][a] == 0:  # restricting to first-visit MC here gave a spikier value plot
N[s][a] += 1
returns_sum[s][a] += G[i]
for state in N.keys():
Q[state] = np.nan_to_num(returns_sum[state] / N[state])
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_with_policy(bj_env, policy, epsilon):
episode = []
state = bj_env.reset()
while True:
action = policy[state] if np.random.uniform() > epsilon else bj_env.action_space.sample()
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def epsilon_scheduler(num_episodes_total, num_episodes_so_far):
epsilon = 1 - num_episodes_so_far/num_episodes_total
return epsilon if epsilon > 0.1 else 0.1
def greedify(Q, action_space):
    # the policy could instead be extracted once, after the episode loop; that would probably be faster
policy = defaultdict(lambda: action_space.sample())
for s in Q:
greedy_action = np.argmax(Q[s])
policy[s] = greedy_action
return policy
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
epsilon = epsilon_scheduler(num_episodes, i_episode)
policy = greedify(Q,env.action_space)
episode = generate_episode_with_policy(env, policy, epsilon)
# compute discounted return
G = np.zeros(len(episode)+1) # one extra for terminal state return which is always zero
for i in reversed(range(len(G[:-1]))):
_, _, r = episode[i]
G[i] = r + gamma * G[i+1]
# update q
for i, (s, a, r) in enumerate(episode):
            # if N[s][a] == 0:  # restricting to first-visit MC here gave a spikier value plot
Q[s][a] += alpha * (G[i] - Q[s][a])
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500_000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state, end="\t")
action = env.action_space.sample()
state, reward, done, info = env.step(action)
print("{}, {}".format(action, reward))
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(20, 9, False) 0, 0.0
End game! Reward: 0.0
You lost :(
(13, 3, True) 1, 0
(16, 3, True) 1, 0
(17, 3, True) 0, -1.0
End game! Reward: -1.0
You lost :(
(19, 2, False) 1, -1
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((12, 6, False), 1, 0), ((13, 6, False), 0, 1.0)]
[((17, 8, False), 0, 1.0)]
[((17, 7, True), 1, 0), ((14, 7, False), 1, 0), ((18, 7, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete every-visit MC prediction function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discount = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discount[:-(1+i)])
N[state][actions[i]] +=1
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
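The control loop below decays $\epsilon$ multiplicatively via `epsilon = max(epsilon*0.9999, 0.05)`. As a rough, illustrative sanity check on that schedule (the calculation below is not part of the exercise):

```python
import math

# Number of episodes before 0.9999**k falls below the 0.05 floor (illustrative check).
k_floor = math.log(0.05) / math.log(0.9999)
print(round(k_floor))  # ~29956, so epsilon sits at its floor for most of a long run
```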
###Code
def get_prob(Q_state, epsilon, nA):
policy_s = np.ones(nA)* epsilon/nA
best_a = np.argmax(Q_state)
policy_s[best_a] = 1 - epsilon + (epsilon/nA)
return policy_s
def mc_control(env, num_episodes, alpha, gamma=1.0, epsilon=1):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
        # apply a decayed epsilon to balance exploration and exploitation
epsilon = max(epsilon*0.9999, 0.05) #set epsilon
# simulate an episode
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p = get_prob(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
# apply discount for r
states, actions, rewards = zip(*episode)
discount = np.array([gamma**i for i in range(len(rewards)+1)])
# update q-table
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
            # the error is the sampled return that follows (state, action) minus the current estimate old_Q
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discount[:-(i+1)]) - old_Q)
# update policy
policy = dict((k, np.argmax(v)) for k,v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 1000000, alpha = 0.002)
###Output
Episode 1000000/1000000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from typing import Dict, List
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
type(env)
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
env.action_space
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(19, 10, True)
(20, 10, True)
End game! Reward: 1.0
You won :)
(18, 9, False)
End game! Reward: 0.0
You lost :(
(6, 6, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
#(state, action, reward)
# state: (13, 10, False)
# action: 1
# reward: 0.0
###Output
[((13, 10, False), 1, 0.0), ((14, 10, False), 1, -1.0)]
[((12, 7, False), 1, 0.0), ((18, 7, False), 1, -1.0)]
[((15, 9, True), 1, 0.0), ((19, 9, True), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
        steps = generate_episode(env)  # use the policy-sampling function passed in, rather than hard-coding it
for stepIndex in range(len(steps)):
forwardSteps = steps[stepIndex:]
forwardValue = sum([pow(gamma, i) * forwardSteps[i][2] for i in range(len(forwardSteps))])
state, action, _ = forwardSteps[0]
N[state][action] += 1
Q[state][action] += forwardValue
#print(forwardValue)
'''
for step in generate_episode_from_limit_stochastic(env):
state, action, reward = step
N[state][action] += 1
Q[state][action] += reward
'''
#print(Q[10])
#print(N[10][0].size)
for key in Q:
#for i in range(N[key].size):
# if N[key][i] == 0:
# N[key][i] = 1
#print(Q[key])
Q[key] = np.divide(Q[key], N[key])
#print(Q[key])
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
numIters = 500000
Q = mc_prediction_q(env, numIters, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
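The implementation below anneals $\epsilon$ exponentially, $\epsilon_i = \max\big(2^{-\,\epsilon_{\text{decay}} \cdot i},\ \epsilon_{\min}\big)$. With the values chosen further down (`epsilon_decay = 1/70000`, `epsilon_min = 0.06`), a rough calculation shows when the floor is reached:

$$2^{-i/70000} = 0.06 \;\Longrightarrow\; i = 70000 \, \log_2 \frac{1}{0.06} \approx 2.8 \times 10^{5} \text{ episodes.}$$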
###Code
def generate_episode_with_policy(env, Q, epsilon):
state = env.reset()
done = False
while not done:
stateQVals : np.ndarray = Q[state]
policy = np.zeros(stateQVals.size)
policy.fill(epsilon / stateQVals.size)
maxIndex = np.argmax(stateQVals)
policy[maxIndex] += 1 - epsilon
action = np.random.choice(np.arange(stateQVals.size), p = policy)
nextState, reward, done, info = env.step(action)
yield (state, action, reward)
state = nextState
def mc_control(env, num_episodes, alpha, epsilon_decay, epsilon_min, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
epsilon = 2 ** (- epsilon_decay * i_episode)
epsilon = max(epsilon, epsilon_min)
if i_episode % 1000 == 0:
avgValue = sum((np.max(q) for q in Q.values())) / len(Q)
print("\rEpisode {}/{}.\tAvg Value: {}\tEpsilon: {}".format(i_episode, num_episodes, avgValue, epsilon), end="")
sys.stdout.flush()
states, actions, rewards = zip(*generate_episode_with_policy(env, Q, epsilon))
for i in range(len(states)):
state = states[i]
action = actions[i]
            totalReward = sum([reward * (gamma ** j) for j, reward in enumerate(rewards[i:])])  # discount by j, the offset within the remainder of the episode
Q[state][action] += alpha * (totalReward - Q[state][action])
policy = {}
for key in Q:
policy[key] = np.argmax(Q[key])
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
num_episodes = 1000000
alpha = 0.001
epsilon_decay = 1/70000
epsilon_min = 0.06
policy, Q = mc_control(env, num_episodes, alpha, epsilon_decay, epsilon_min)
###Output
Episode 1000000/1000000. Avg Value: 0.041775609720589316 Epsilon: 0.0607285925363007529
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Mini Project: Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvUse the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
import gym
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
print(' ', 'hit' if action == 1 else 'stick', ' => State:', state,'R:', reward, 'Done:', done)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(12, 3, False)
stick => State: (12, 3, False) R: 1.0 Done: True
End game! Reward: 1.0
You won :)
(12, 10, False)
hit => State: (22, 10, False) R: -1 Done: True
End game! Reward: -1
You lost :(
(20, 2, False)
hit => State: (25, 2, False) R: -1 Done: True
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC Prediction: State ValuesIn this section, you will write your own implementation of MC prediction (for estimating the state-value function).We will begin by investigating a policy where the player always sticks if the sum of her cards exceeds 18. The function `generate_episode_from_limit` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit(bj_env):
episode = []
state = bj_env.reset()
while True:
action = 0 if state[0] > 18 else 1
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit` function.*)
###Code
for i in range(5):
print(generate_episode_from_limit(env))
###Output
[((13, 7, False), 1, 0), ((19, 7, False), 0, 1.0)]
[((14, 9, False), 1, -1)]
[((20, 4, True), 0, 1.0)]
[((20, 5, False), 0, 1.0)]
[((9, 8, False), 1, 0), ((20, 8, True), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `V`: This is a dictionary where `V[s]` is the estimated value of state `s`. For example, if your code returns the following output:```{(4, 7, False): -0.38775510204081631, (18, 6, False): -0.58434296365330851, (13, 2, False): -0.43409090909090908, (6, 7, False): -0.3783783783783784, ...```then the value of state `(4, 7, False)` was estimated to be `-0.38775510204081631`.If you are unfamiliar with how to use `defaultdict` in Python, you are encouraged to check out [this source](https://www.accelebrate.com/blog/using-defaultdict-python/).
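As a quick illustration of the `defaultdict` pattern used throughout this notebook (the keys below are made up purely for the example):

```python
from collections import defaultdict
import numpy as np

returns = defaultdict(list)                  # missing keys start out as an empty list
returns[(13, 10, False)].append(1.0)         # no KeyError on first access
counts = defaultdict(lambda: np.zeros(2))    # or one array of zeros per state
counts[(13, 10, False)][1] += 1
print(returns[(13, 10, False)], counts[(13, 10, False)])  # [1.0] [0. 1.]
```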
###Code
from collections import defaultdict
import numpy as np
import sys
def mc_prediction_v(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionary of lists
returns = defaultdict(list)
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
appeared = set()
discounts = np.array([gamma ** i for i in range(len(rewards))])
for i, state in enumerate(states):
# first visit
if not state in appeared:
appeared.add(state)
discounted_rewards = sum(rewards[i:] * discounts[:len(rewards) - i])
returns[state].append(discounted_rewards)
V = {}
for state in returns:
V[state] = np.mean(returns[state])
return V
###Output
_____no_output_____
###Markdown
Use the cell below to calculate and plot the state-value function estimate. (_The code for plotting the value function has been borrowed from [this source](https://github.com/dennybritz/reinforcement-learning/blob/master/lib/plotting.py) and slightly adapted._)To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
from plot_utils import plot_blackjack_values
# obtain the value function
V = mc_prediction_v(env, 500000, generate_episode_from_limit)
#V = mc_prediction_v(env, 4, generate_episode_from_limit)
#print(V)
# plot the value function
plot_blackjack_values(V)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC Prediction: Action ValuesIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
appeared = set()
discounts = np.array([gamma ** i for i in range(len(rewards))])
for i, state in enumerate(states):
# first visit
action = actions[i]
sa = (state, action)
if not sa in appeared:
appeared.add(sa)
returns_sum[state][action] += sum(rewards[i:] * discounts[:len(rewards) - i])
N[state][action] += 1
    for state in returns_sum:
        for action in range(env.action_space.n):
            if N[state][action] > 0:  # guard against actions that were never taken in this state
                Q[state][action] = returns_sum[state][action] / N[state][action]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
from plot_utils import plot_blackjack_values
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
#Q = mc_prediction_q(env, 5, generate_episode_from_limit_stochastic)
# obtain the state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 3: MC Control: GLIEIn this section, you will write your own implementation of GLIE MC control. Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
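The GLIE variant below keeps a visit count $N(s, a)$ and uses it as the step size, replacing the constant $\alpha$ of the next part with a running mean,

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \frac{1}{N(S_t, A_t)} \big(G_t - Q(S_t, A_t)\big),$$

while $\epsilon$ is decayed over the episodes so that the policy becomes greedy in the limit.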
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
if state in Q:
            # greedy with respect to Q[state], using the epsilon-greedy action probabilities
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA))
else:
                # state not yet in Q: fall back to a uniformly random action
action = env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def mc_control_GLIE(env, num_episodes, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionaries of arrays
Q = defaultdict(lambda: np.zeros(nA))
N = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
#epsilon = 1.0 / (num_episodes - i_episode + 1)
epsilon = 1.0/((i_episode/8000)+1)
episode = generate_episode_from_Q(env, Q, epsilon, nA)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma ** i for i in range(len(rewards))])
appeared = set()
for i, state in enumerate(states):
# first visit
action = actions[i]
sa = (state, action)
if not sa in appeared:
appeared.add(sa)
x = sum(rewards[i:] * discounts[:len(rewards) - i])
N[state][action] += 1
Q[state][action] = Q[state][action] + (x - Q[state][action]) / N[state][action]
    # greedy policy: for each state, pick the action with the largest estimated value
policy = {state: np.argmax(actions) for state, actions in Q.items()}
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function.
###Code
# obtain the estimated optimal policy and action-value function
policy_glie, Q_glie = mc_control_GLIE(env, 500000)
#policy_glie, Q_glie = mc_control_GLIE(env, 50000)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the state-value function
V_glie = dict((k,np.max(v)) for k, v in Q_glie.items())
# plot the state-value function
plot_blackjack_values(V_glie)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
from plot_utils import plot_policy
# plot the policy
plot_policy(policy_glie)
###Output
_____no_output_____
###Markdown
The **true** optimal policy $\pi_*$ can be found on page 82 of the [textbook](http://go.udacity.com/rl-textbook) (and appears below). Compare your final estimate to the optimal policy - how close are you able to get? If you are not happy with the performance of your algorithm, take the time to tweak the decay rate of $\epsilon$ and/or run the algorithm for more episodes to attain better results.![True Optimal Policy](images/optimal.png) Part 4: MC Control: Constant-$\alpha$In this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
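For comparison with the GLIE version above, the only change in the constant-$\alpha$ update implemented below is that the per-pair step size $1/N(S_t, A_t)$ is replaced by a fixed step size $\alpha$ (a sketch):

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\bigl(G_t - Q(S_t, A_t)\bigr) = (1 - \alpha)\,Q(S_t, A_t) + \alpha\,G_t$$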
###Code
def mc_control_alpha(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
epsilon = 1.0/((i_episode/8000)+1)
episode = generate_episode_from_Q(env, Q, epsilon, nA)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma ** i for i in range(len(rewards))])
appeared = set()
for i, state in enumerate(states):
# first visit
action = actions[i]
sa = (state, action)
if not sa in appeared:
appeared.add(sa)
x = sum(rewards[i:] * discounts[:len(rewards) - i])
Q[state][action] = (1 - alpha) * Q[state][action] + alpha * x
# greedy policy: for each state, pick the action with the highest estimated value
policy = {state: np.argmax(action_values) for state, action_values in Q.items()}
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function.
###Code
# obtain the estimated optimal policy and action-value function
policy_alpha, Q_alpha = mc_control_alpha(env, 500000, 0.008)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the state-value function
V_alpha = dict((k,np.max(v)) for k, v in Q_alpha.items())
# plot the state-value function
plot_blackjack_values(V_alpha)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy_alpha)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(20, 3, False)
End game! Reward: -1
You lost :(
(15, 1, False)
End game! Reward: -1
You lost :(
(12, 1, False)
(14, 1, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
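As a quick reference while implementing the prediction step, the quantity being averaged for each state-action pair is the discounted return that follows each (first) visit (a sketch of the definition, consistent with the notation above):

$$G_t = R_{t+1} + \gamma R_{t+2} + \cdots + \gamma^{T-t-1} R_T, \qquad Q(s, a) \approx \frac{1}{N(s, a)} \sum_{\text{first visits to } (s, a)} G_t$$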
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((20, 6, True), 0, 1.0)]
[((12, 6, False), 0, -1.0)]
[((13, 10, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def calculate_returns(episode, gamma):
returns = []
accum = 0
for i, (_, _, reward) in enumerate(list(reversed(episode))):
accum = reward + gamma * accum
returns.append(accum)
returns.reverse()
return returns
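# quick sanity check (hand-worked example): for rewards (1, 0, -1) and gamma = 0.5,
# calculate_returns gives [1 + 0.5*0 + 0.25*(-1), 0 + 0.5*(-1), -1] = [0.75, -0.5, -1]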
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
# returns_sum is a table that keeps track of the sum of the (discounted) returns
# obtained from first visits to each state-action pair
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
# N is a table that keeps track of the number of first visits we have made to each state-action pair
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episode = generate_episode(env)
is_first_visit = defaultdict(lambda: [True for _ in range(env.action_space.n)])
returns = calculate_returns(episode, gamma)
for (state, action, _), return_ in zip(episode, returns):
if is_first_visit[state][action]:
is_first_visit[state][action] = False
N[state][action] += 1
returns_sum[state][action] += return_
Q[state][action] = returns_sum[state][action] / N[state][action]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
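The conversion in the next cell follows from $V_\pi(s) = \sum_a \pi(a \mid s)\, Q_\pi(s, a)$; under the stochastic policy above (with `STICK` $=0$ and `HIT` $=1$) this works out to (a sketch):

$$V_\pi(s) = \begin{cases} 0.8\,Q_\pi(s, \text{STICK}) + 0.2\,Q_\pi(s, \text{HIT}) & \text{if the player's sum exceeds } 18 \\ 0.2\,Q_\pi(s, \text{STICK}) + 0.8\,Q_\pi(s, \text{HIT}) & \text{otherwise.} \end{cases}$$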
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
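The helper defined in the cell below splits the greedy probability mass evenly whenever several actions tie for the maximum value, rather than giving all of it to a single `argmax`. A minimal standalone sketch of that idea (the function and variable names here are my own, for illustration only):

```python
import numpy as np

def tie_splitting_probs(q_values, epsilon):
    # mark every action whose value equals the maximum (ties included)
    greedy_mask = (q_values >= q_values.max()).astype(float)
    n_actions = q_values.size
    n_greedy = greedy_mask.sum()
    # split (1 - epsilon) evenly over the tied greedy actions, plus epsilon/n for every action
    return greedy_mask * (1 - epsilon) / n_greedy + epsilon / n_actions

print(tie_splitting_probs(np.array([1.0, 1.0]), 0.1))  # -> probabilities [0.5, 0.5]
print(tie_splitting_probs(np.array([0.0, 1.0]), 0.1))  # -> probabilities [0.05, 0.95]
```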
###Code
def to_epsilon_greedy_Q(Q, epsilon):
nA = Q.default_factory().size
epsilon_greedy_Q = defaultdict(Q.default_factory)
for k, v in Q.items():
# also handle the case where several actions tie for the greedy (maximum) value
greedy_v = (v >= np.max(v)).astype(float)
num_greedy_action = np.sum(greedy_v)
normed_greedy_action_prob = (1 - epsilon) / num_greedy_action
epsilon_greedy_v = (greedy_v * normed_greedy_action_prob) + (epsilon / nA)
epsilon_greedy_Q[k] = epsilon_greedy_v
return epsilon_greedy_Q
def generate_episode_using_Q(env, Q):
episode = []
nA = env.action_space.n
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=Q[state]) if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def mc_control(env, num_episodes, alpha, generate_episode, gamma=1.0, epsilon_start=1.0, epsilon_decay=.99999, epsilon_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = epsilon_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
epsilon = max(epsilon * epsilon_decay, epsilon_min)
epsilon_greedy_Q = to_epsilon_greedy_Q(Q, epsilon)
episode = generate_episode(env, epsilon_greedy_Q)
is_first_visit = defaultdict(lambda: [True for _ in range(env.action_space.n)])
returns = calculate_returns(episode, gamma)
for (state, action, _), return_ in zip(episode, returns):
if is_first_visit[state][action]:
is_first_visit[state][action] = False
Q[state][action] = Q[state][action] + alpha * (return_ - Q[state][action])
policy = dict((k, np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02, generate_episode_using_Q)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k, np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Create an instance of the BlackJack environment
###Code
blackjack_env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Take a look at the action space and observation spaceThe agent has two actions in each state:- hit (1)- stick (0)The state has three variables:- the current sum of the player's cards ({0...31})- the face-up value of the dealer's card ({1...10}), where "1" represents an ace- a usable-ace flag ("0" for not usable and "1" for usable)
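As a quick illustration of the observation format, a returned state can be unpacked directly (a minimal sketch using the `blackjack_env` created above; the unpacked variable names are just illustrative):

```python
player_sum, dealer_card, usable_ace = blackjack_env.reset()
print(player_sum, dealer_card, usable_ace)  # e.g. 14 10 False
```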
###Code
print(blackjack_env.action_space)
print(blackjack_env.observation_space)
###Output
Discrete(2)
Tuple(Discrete(32), Discrete(11), Discrete(2))
###Markdown
Play a few episodes with a random policy
###Code
for episode in range(100):
state = blackjack_env.reset()
while True:
print(state)
action = blackjack_env.action_space.sample()
print(action)
state, reward, done, info = blackjack_env.step(action)
if done: # episode ends
print(f'Game Ends. Terminate State = {state}, Rewards = {reward}')
break
###Output
(12, 10, False)
0
Game Ends. Terminate State = (12, 10, False), Rewards = -1.0
(11, 3, False)
1
(21, 3, False)
1
Game Ends. Terminate State = (31, 3, False), Rewards = -1
(16, 1, False)
0
Game Ends. Terminate State = (16, 1, False), Rewards = -1.0
(9, 10, False)
1
(16, 10, False)
1
Game Ends. Terminate State = (22, 10, False), Rewards = -1
(6, 10, False)
1
(16, 10, False)
1
Game Ends. Terminate State = (26, 10, False), Rewards = -1
(20, 8, False)
1
Game Ends. Terminate State = (30, 8, False), Rewards = -1
(14, 7, False)
1
(20, 7, False)
1
Game Ends. Terminate State = (30, 7, False), Rewards = -1
(17, 1, False)
0
Game Ends. Terminate State = (17, 1, False), Rewards = 0.0
(15, 4, False)
0
Game Ends. Terminate State = (15, 4, False), Rewards = 1.0
(17, 10, False)
1
(18, 10, False)
0
Game Ends. Terminate State = (18, 10, False), Rewards = -1.0
(21, 2, True)
0
Game Ends. Terminate State = (21, 2, True), Rewards = 1.0
(20, 10, False)
0
Game Ends. Terminate State = (20, 10, False), Rewards = 0.0
(15, 10, False)
0
Game Ends. Terminate State = (15, 10, False), Rewards = -1.0
(14, 10, False)
0
Game Ends. Terminate State = (14, 10, False), Rewards = 1.0
(9, 2, False)
1
(17, 2, False)
0
Game Ends. Terminate State = (17, 2, False), Rewards = 1.0
(7, 7, False)
1
(18, 7, True)
1
(20, 7, True)
0
Game Ends. Terminate State = (20, 7, True), Rewards = 1.0
(18, 5, False)
0
Game Ends. Terminate State = (18, 5, False), Rewards = -1.0
(12, 1, False)
1
Game Ends. Terminate State = (22, 1, False), Rewards = -1
(10, 10, False)
1
(15, 10, False)
1
Game Ends. Terminate State = (24, 10, False), Rewards = -1
(20, 7, False)
1
Game Ends. Terminate State = (25, 7, False), Rewards = -1
(16, 6, False)
0
Game Ends. Terminate State = (16, 6, False), Rewards = -1.0
(19, 4, False)
1
Game Ends. Terminate State = (29, 4, False), Rewards = -1
(14, 10, False)
0
Game Ends. Terminate State = (14, 10, False), Rewards = -1.0
(20, 10, False)
1
Game Ends. Terminate State = (22, 10, False), Rewards = -1
(12, 4, False)
0
Game Ends. Terminate State = (12, 4, False), Rewards = -1.0
(15, 5, False)
1
(21, 5, False)
1
Game Ends. Terminate State = (24, 5, False), Rewards = -1
(10, 9, False)
0
Game Ends. Terminate State = (10, 9, False), Rewards = -1.0
(15, 10, False)
1
Game Ends. Terminate State = (25, 10, False), Rewards = -1
(15, 10, False)
1
Game Ends. Terminate State = (24, 10, False), Rewards = -1
(21, 10, True)
0
Game Ends. Terminate State = (21, 10, True), Rewards = 1.0
(11, 1, False)
0
Game Ends. Terminate State = (11, 1, False), Rewards = -1.0
(20, 8, False)
1
Game Ends. Terminate State = (30, 8, False), Rewards = -1
(18, 7, False)
0
Game Ends. Terminate State = (18, 7, False), Rewards = 1.0
(15, 10, False)
1
Game Ends. Terminate State = (25, 10, False), Rewards = -1
(15, 7, False)
1
Game Ends. Terminate State = (25, 7, False), Rewards = -1
(12, 10, False)
1
(18, 10, False)
1
(21, 10, False)
1
Game Ends. Terminate State = (29, 10, False), Rewards = -1
(16, 6, True)
0
Game Ends. Terminate State = (16, 6, True), Rewards = -1.0
(21, 7, True)
1
(14, 7, False)
0
Game Ends. Terminate State = (14, 7, False), Rewards = -1.0
(19, 10, True)
1
(20, 10, True)
1
(15, 10, False)
1
(19, 10, False)
1
Game Ends. Terminate State = (24, 10, False), Rewards = -1
(12, 3, False)
0
Game Ends. Terminate State = (12, 3, False), Rewards = -1.0
(8, 8, False)
1
(18, 8, False)
0
Game Ends. Terminate State = (18, 8, False), Rewards = 1.0
(13, 8, False)
0
Game Ends. Terminate State = (13, 8, False), Rewards = 1.0
(19, 4, False)
1
Game Ends. Terminate State = (27, 4, False), Rewards = -1
(12, 9, False)
1
(15, 9, False)
0
Game Ends. Terminate State = (15, 9, False), Rewards = 1.0
(20, 9, False)
1
Game Ends. Terminate State = (28, 9, False), Rewards = -1
(20, 10, False)
0
Game Ends. Terminate State = (20, 10, False), Rewards = 1.0
(20, 5, False)
1
Game Ends. Terminate State = (28, 5, False), Rewards = -1
(20, 3, False)
0
Game Ends. Terminate State = (20, 3, False), Rewards = 1.0
(18, 10, False)
0
Game Ends. Terminate State = (18, 10, False), Rewards = 1.0
(11, 10, False)
0
Game Ends. Terminate State = (11, 10, False), Rewards = -1.0
(10, 7, False)
0
Game Ends. Terminate State = (10, 7, False), Rewards = -1.0
(13, 7, False)
0
Game Ends. Terminate State = (13, 7, False), Rewards = 1.0
(13, 3, False)
1
Game Ends. Terminate State = (23, 3, False), Rewards = -1
(17, 7, False)
1
Game Ends. Terminate State = (22, 7, False), Rewards = -1
(13, 10, False)
0
Game Ends. Terminate State = (13, 10, False), Rewards = 1.0
(14, 10, False)
0
Game Ends. Terminate State = (14, 10, False), Rewards = -1.0
(9, 9, False)
0
Game Ends. Terminate State = (9, 9, False), Rewards = -1.0
(8, 4, False)
1
(13, 4, False)
1
(20, 4, False)
0
Game Ends. Terminate State = (20, 4, False), Rewards = 1.0
(17, 10, False)
1
(18, 10, False)
0
Game Ends. Terminate State = (18, 10, False), Rewards = -1.0
(16, 10, False)
0
Game Ends. Terminate State = (16, 10, False), Rewards = -1.0
(10, 10, False)
1
(16, 10, False)
0
Game Ends. Terminate State = (16, 10, False), Rewards = -1.0
(12, 10, False)
1
(19, 10, False)
1
Game Ends. Terminate State = (28, 10, False), Rewards = -1
(16, 10, False)
1
(19, 10, False)
1
Game Ends. Terminate State = (28, 10, False), Rewards = -1
(15, 10, False)
1
(20, 10, False)
1
Game Ends. Terminate State = (28, 10, False), Rewards = -1
(12, 10, False)
0
Game Ends. Terminate State = (12, 10, False), Rewards = -1.0
(20, 1, False)
1
(21, 1, False)
1
Game Ends. Terminate State = (23, 1, False), Rewards = -1
(11, 10, False)
0
Game Ends. Terminate State = (11, 10, False), Rewards = -1.0
(14, 8, False)
0
Game Ends. Terminate State = (14, 8, False), Rewards = 1.0
(19, 7, False)
1
Game Ends. Terminate State = (28, 7, False), Rewards = -1
(7, 2, False)
0
Game Ends. Terminate State = (7, 2, False), Rewards = -1.0
(15, 9, False)
0
Game Ends. Terminate State = (15, 9, False), Rewards = -1.0
(17, 5, False)
1
Game Ends. Terminate State = (24, 5, False), Rewards = -1
(5, 9, False)
1
(15, 9, False)
0
Game Ends. Terminate State = (15, 9, False), Rewards = -1.0
(10, 7, False)
1
(16, 7, False)
0
Game Ends. Terminate State = (16, 7, False), Rewards = 1.0
(12, 8, False)
0
Game Ends. Terminate State = (12, 8, False), Rewards = -1.0
(16, 8, False)
1
Game Ends. Terminate State = (26, 8, False), Rewards = -1
(13, 10, False)
0
Game Ends. Terminate State = (13, 10, False), Rewards = -1.0
(11, 9, False)
0
Game Ends. Terminate State = (11, 9, False), Rewards = -1.0
(15, 3, False)
0
Game Ends. Terminate State = (15, 3, False), Rewards = -1.0
(21, 7, True)
1
(17, 7, False)
1
Game Ends. Terminate State = (22, 7, False), Rewards = -1
(21, 8, True)
0
Game Ends. Terminate State = (21, 8, True), Rewards = 0.0
(20, 10, False)
1
Game Ends. Terminate State = (28, 10, False), Rewards = -1
(5, 1, False)
0
Game Ends. Terminate State = (5, 1, False), Rewards = -1.0
(13, 10, True)
0
Game Ends. Terminate State = (13, 10, True), Rewards = 1.0
(19, 10, False)
0
Game Ends. Terminate State = (19, 10, False), Rewards = -1.0
(21, 10, True)
0
Game Ends. Terminate State = (21, 10, True), Rewards = 1.0
(19, 10, False)
0
Game Ends. Terminate State = (19, 10, False), Rewards = 1.0
(15, 3, False)
1
Game Ends. Terminate State = (25, 3, False), Rewards = -1
(11, 1, False)
0
Game Ends. Terminate State = (11, 1, False), Rewards = -1.0
(15, 6, True)
1
(14, 6, False)
0
Game Ends. Terminate State = (14, 6, False), Rewards = 1.0
(7, 1, False)
1
(17, 1, False)
1
Game Ends. Terminate State = (27, 1, False), Rewards = -1
(8, 1, False)
1
(12, 1, False)
1
(17, 1, False)
1
Game Ends. Terminate State = (27, 1, False), Rewards = -1
(15, 9, False)
0
Game Ends. Terminate State = (15, 9, False), Rewards = 1.0
(20, 2, False)
1
Game Ends. Terminate State = (30, 2, False), Rewards = -1
(20, 1, False)
1
Game Ends. Terminate State = (30, 1, False), Rewards = -1
(16, 4, True)
1
(12, 4, False)
1
(19, 4, False)
0
Game Ends. Terminate State = (19, 4, False), Rewards = 0.0
(16, 3, True)
1
(14, 3, False)
1
Game Ends. Terminate State = (24, 3, False), Rewards = -1
(11, 1, False)
0
Game Ends. Terminate State = (11, 1, False), Rewards = -1.0
(9, 10, False)
0
Game Ends. Terminate State = (9, 10, False), Rewards = -1.0
(21, 2, True)
1
(21, 2, False)
0
Game Ends. Terminate State = (21, 2, False), Rewards = 1.0
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
hit_stick_prob = [0.2, 0.8] if state[0] > 18 else [0.8, 0.2]
action = np.random.choice([1, 0], p=hit_stick_prob)
new_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = new_state
if done:
break
return episode
for _ in range(3):
print(generate_episode_from_limit_stochastic(blackjack_env))
###Output
[((14, 9, False), 0, -1.0)]
[((12, 2, False), 1, -1)]
[((17, 9, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`. First-visit MC prediction
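Before the two implementations that follow, here is a tiny hand-worked sketch of the distinction (my own example values): if a single episode visits the same state-action pair twice, first-visit MC averages only the return from the first occurrence, while every-visit MC averages the returns from both occurrences.

```python
# returns observed at two visits to the same (state, action) pair within one episode
returns_at_visits = [1.0, -1.0]
first_visit_estimate = returns_at_visits[0]                              # -> 1.0
every_visit_estimate = sum(returns_at_visits) / len(returns_at_visits)  # -> 0.0
print(first_visit_estimate, every_visit_estimate)
```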
###Code
def first_visit_mc_prediction(env, num_episodes, generate_episode, gamma=1):
q = defaultdict(lambda: [0] * env.action_space.n)
visit_count = defaultdict(lambda: [0] * env.action_space.n)
for _ in range(num_episodes):
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = [gamma ** i for i in range(len(rewards))]
visited_spaces = set()
for i, state in enumerate(states):
if (state, actions[i]) in visited_spaces:
continue
discounted_return = 0
for j, reward in enumerate(rewards[i:]):
discounted_return += reward * discounts[j]
current_q = q[state][actions[i]]
current_count = visit_count[state][actions[i]]
new_q = (current_q * current_count + discounted_return) / (current_count + 1)
q[state][actions[i]] = new_q
visit_count[state][actions[i]] += 1
visited_spaces.add((state, actions[i]))
return q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.
###Code
# obtain the action-value function
Q = first_visit_mc_prediction(blackjack_env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Every-visit MC prediction
###Code
def every_visit_mc_prediction(env, num_episodes, generate_episode, gamma=1):
q = defaultdict(lambda: [0] * env.action_space.n) # list = [action_space]
visit_count = defaultdict(lambda: [0] * env.action_space.n) # list = [action_space]
for _ in range(num_episodes):
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = [gamma ** i for i in range(len(rewards))]
for i, state in enumerate(states):
# calculate discounted reward
discounted_return = 0
for j, reward in enumerate(rewards[i:]):
discounted_return += reward * (discounts[j])
# update q
current_q = q[state][actions[i]]
current_count = visit_count[state][actions[i]]
new_q = (current_q * current_count + discounted_return) / (current_count + 1)
q[state][actions[i]] = new_q
visit_count[state][actions[i]] += 1
return q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.
###Code
# obtain the action-value function
Q = every_visit_mc_prediction(blackjack_env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.
###Code
def constant_alpha_mc_control(env, num_episodes, alpha, gamma=1):
q = defaultdict(lambda: [0] * env.action_space.n)
policy = defaultdict(int)
for e in range(num_episodes):
print(f'\rEpisode {e+1}/{num_episodes}.', end='')
episode = run_episode_using_epsilon_greedy(env, q)
states, actions, rewards = zip(*episode)
discounts = [gamma ** i for i in range(len(rewards))]
for i, state in enumerate(states):
policy[state] = actions[i] # record the action taken in this state (an epsilon-greedy sample, so not necessarily the greedy action w.r.t. q)
G = 0
for j, reward in enumerate(rewards[i:]):
G += reward * discounts[j]
q[state][actions[i]] += alpha * (G - q[state][actions[i]])
return q, policy
def run_episode_using_epsilon_greedy(env, q, epsilon=0.1):
episode = []
state = env.reset()
while True:
action = epsilon_greedy(q, state, epsilon)
new_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = new_state
if done:
return episode
def epsilon_greedy(q, state, epsilon):
"""
pick action using epsilon-greedy policy
"""
q_actions, num_actions = q[state], len(q[state])
# exploit with probability (1 - epsilon); otherwise explore uniformly at random,
# which gives the greedy action total probability 1 - epsilon + epsilon / num_actions
pick_greedy = np.random.rand() > epsilon
if pick_greedy:
argmax = np.argwhere(q_actions == np.max(q_actions)).flatten().tolist()
action = np.random.choice(argmax)
else:
action = np.random.choice(list(range(num_actions)))
return action
# obtain the estimated optimal policy and action-value function
Q, policy = constant_alpha_mc_control(blackjack_env, 500000, 0.01)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
#print(state, reward, done, info)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(19, 10, False)
End game! Reward: -1
You lost :(
(11, 10, False)
(20, 10, False)
End game! Reward: 1.0
You won :)
(11, 7, False)
(21, 7, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((12, 7, False), 0, -1.0)]
[((10, 8, False), 1, 0), ((16, 8, False), 0, -1.0)]
[((16, 10, False), 1, 0), ((19, 10, False), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
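One indexing detail worth calling out in the cell below: because `discounts` is built with `len(rewards) + 1` entries, the slice `discounts[:-(1+i)]` keeps exactly `len(rewards) - i` of them, so `rewards[i:] * discounts[:-(1+i)]` pairs each remaining reward with the right power of $\gamma$. A tiny standalone sketch of that indexing (my own example values):

```python
import numpy as np

gamma, rewards = 0.9, (1.0, 0.0, -1.0)
discounts = np.array([gamma**i for i in range(len(rewards) + 1)])  # [1.0, 0.9, 0.81, 0.729]
i = 1                                                              # return from time step 1
print(sum(rewards[i:] * discounts[:-(1 + i)]))                     # 0 + 0.9*(-1) = -0.9
```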
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# generate an episode
episode = generate_episode(env)
# obtain the states, actions and rewards
states, actions, rewards = zip(*episode)
# prepare the discounting
discounts = np.array([gamma**i for i in range (len(rewards)+1)])
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
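A small sanity check on the default exploration schedule used by `mc_control` below (my own arithmetic, assuming the default keyword arguments): with a per-episode decay factor of 0.99999, epsilon reaches the 0.05 floor after roughly 300,000 episodes, so the remainder of the 500,000 training episodes run at the minimum exploration rate.

```python
import math

eps_start, eps_decay, eps_min = 1.0, 0.99999, 0.05
episodes_to_floor = math.log(eps_min / eps_start) / math.log(eps_decay)
print(round(episodes_to_floor))  # ~299572 episodes until epsilon hits eps_min
```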
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(14, 7, False)
End game! Reward: 1.0
You won :)
(15, 9, False)
End game! Reward: -1.0
You lost :(
(16, 10, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((14, 7, True), 1, 0.0), ((19, 7, True), 0, 1.0)]
[((19, 3, False), 0, 1.0)]
[((14, 8, True), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# generate one episode with the given policy
output = generate_episode(env)
S, A, R = zip(*output)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(R)+1)])
for i, s in enumerate( S ):
a = A[i]
r = R[i]
N[s][a] += 1.0
returns_sum[s][a] += sum( R[i:]*discounts[:-(1+i)] )
Q[s][a] = returns_sum[s][a]/N[s][a]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
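One line in the episode generator below is easy to misread: the greedy action's probability is written as $1 + \epsilon\,(1 - n_A)/n_A$, which is just an algebraic rearrangement of the usual $\epsilon$-greedy weight (a sketch):

$$1 + \epsilon\,\frac{1 - n_A}{n_A} = 1 + \frac{\epsilon}{n_A} - \epsilon = 1 - \epsilon + \frac{\epsilon}{n_A}$$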
###Code
def generate_episode_policy( env, Q, epsilon, nA ):
episode = []
state = env.reset()
while True:
if state in Q:
action_max_reward = np.argmax( Q[state] ) # greedy action: the one with the highest estimated value in this state
# start by giving every action the exploration probability epsilon / nA
policy_probs = np.ones(nA) * epsilon / nA
# the greedy action gets the remaining probability mass: 1 + eps*(1 - nA)/nA == 1 - eps + eps/nA
policy_probs[ action_max_reward ] = 1.0 + epsilon*(1.0-nA)/float(nA)
# sample one action according to these epsilon-greedy probabilities
action = np.random.choice( np.arange(nA), p=policy_probs )
else:
action = env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05 ):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate one episode with the current epsilon-greedy policy
output = generate_episode_policy(env, Q, epsilon, nA )
S, A, R = zip(*output)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(R)+1)])
for i, s in enumerate( S ):
a = A[i]
old_q = Q[s][a]
reward = sum(R[i:]*discounts[:-(1+i)])
# note: the weights here are swapped relative to the usual constant-alpha rule
# Q <- (1 - alpha)*Q + alpha*G, so the effective step size is (1 - alpha);
# the call below therefore uses alpha = 0.98, i.e. an effective step size of 0.02
Q[s][a] = alpha*old_q + ( 1.0 - alpha )*( reward )
#for k, v in Q.items():
# idx = np.argmax( v )
# policy[k[idx]] = Q[k][idx]
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.98 )
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
print(env.action_space.n)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
2
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(18, 3, False)
End game! Reward: -1.0
You lost :(
(12, 2, False)
(19, 2, False)
End game! Reward: -1
You lost :(
(9, 10, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
episode = generate_episode_from_limit_stochastic(env)
print('episode={}'.format(episode))
(states, actions, rewards) = zip(*episode)
print('states={}'.format(states))
print('actions={}'.format(actions))
print('rewards={}'.format(rewards))
gamma = 0.9
discount = np.array([gamma**i for i in range(len(actions)+1)])
print('discount={}'.format(discount))
discount = np.array([gamma**i for i in range(len(rewards)+1)])
i = 0
print(rewards[i:] * discount[:-(1+i)])
print(discount[:-(1+i)])
###Output
episode=[((15, 8, False), 1, -1)]
states=((15, 8, False),)
actions=(1,)
rewards=(-1,)
discount=[ 1. 0.9]
[-1.]
[ 1.]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# generate an episode
episode = generate_episode(env)
# obtain the states, actions, and rewards
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA))\
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
    """Return the epsilon-greedy action probabilities for a single state."""
    policy_s = np.ones(nA) * epsilon / nA
    best_a = np.argmax(Q_s)
    policy_s[best_a] = 1 - epsilon + (epsilon / nA)
    return policy_s
def update_Q(env, episode, Q, alpha, gamma):
    """Constant-alpha update of the action-value estimates from one episode."""
    states, actions, rewards = zip(*episode)
    discounts = np.array([gamma**i for i in range(len(rewards) + 1)])
    for i, state in enumerate(states):
        old_Q = Q[state][actions[i]]
        Q[state][actions[i]] = old_Q + alpha * (sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
    return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
epsilon = max(epsilon * eps_decay, eps_min)
episode = generate_episode_from_Q(env, Q, epsilon, nA)
Q = update_Q(env, episode, Q, alpha, gamma)
policy = dict((k, np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
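###Markdown
As a small illustration of the epsilon-greedy probabilities computed by `get_probs` above (the example numbers are arbitrary): with two actions and $\epsilon = 0.1$, every action receives $\epsilon / nA = 0.05$ and the greedy action additionally receives $1 - \epsilon$, giving probabilities $[0.95, 0.05]$.
###Code
# Illustration only: epsilon-greedy probabilities for an arbitrary two-action Q_s.
print(get_probs(np.array([0.2, -0.4]), epsilon=0.1, nA=2))   # expected: [0.95 0.05]
###Output
_____no_output_____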
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
[2018-07-19 21:35:31,902] Making new env: Blackjack-v0
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(10):
state = env.reset()
while True:
action = env.action_space.sample()
print(state, "Action:", action)
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(7, 1, False) Action: 0
End game! Reward: -1.0
You lost :(
(11, 5, False) Action: 1
(21, 5, False) Action: 1
End game! Reward: -1
You lost :(
(19, 8, False) Action: 0
End game! Reward: -1.0
You lost :(
(14, 1, False) Action: 1
(20, 1, False) Action: 1
End game! Reward: -1
You lost :(
(9, 10, False) Action: 1
(12, 10, False) Action: 1
(18, 10, False) Action: 1
End game! Reward: -1
You lost :(
(21, 10, True) Action: 1
(17, 10, False) Action: 1
End game! Reward: -1
You lost :(
(15, 4, False) Action: 0
End game! Reward: -1.0
You lost :(
(15, 6, False) Action: 0
End game! Reward: -1.0
You lost :(
(20, 10, True) Action: 1
(14, 10, False) Action: 0
End game! Reward: -1.0
You lost :(
(13, 7, False) Action: 0
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(10):
print(generate_episode_from_limit_stochastic(env))
episode = generate_episode_from_limit_stochastic(env)
print(episode)
states, actions, rewards = zip(*episode) #unzip episode
print(states)
print(actions)
print(rewards)
###Output
[((14, 10, False), 1, -1)]
((14, 10, False),)
(1,)
(-1,)
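###Markdown
A quick sanity check, added here for illustration (the sample size of 10000 is an arbitrary assumption): since the final reward of each sampled episode is the outcome of the hand, the fraction of episodes whose last reward is positive gives a rough empirical win rate for this stochastic policy.
###Code
# Illustration only: rough empirical win rate of the stochastic policy sampled above.
n_games = 10000      # assumed sample size, chosen only for illustration
wins = 0
for _ in range(n_games):
    episode = generate_episode_from_limit_stochastic(env)
    if episode[-1][2] > 0:   # reward of the final (state, action, reward) tuple
        wins += 1
print('approximate win rate: {:.3f}'.format(wins / n_games))
###Output
_____no_output_____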
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
# obtain the states, actions, and rewards
states, actions, rewards = zip(*episode) #unzip episode
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
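The constant-$\alpha$ update applied after each episode, as implemented in `update_Q` below, is $Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\,\bigl(G_t - Q(S_t, A_t)\bigr)$ with $G_t = \sum_{k=0}^{T-t-1} \gamma^k R_{t+1+k}$, so each estimate moves a fraction $\alpha$ of the way toward the sampled return $G_t$.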
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
episode = []
state = env.reset()
while True:
if state in Q:
probs = get_probs(Q[state], epsilon, nA)
action = np.random.choice(np.arange(2), p=probs)
else:
action = env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
probs = np.ones(nA)*epsilon/nA
best_a = np.argmax(Q_s)
probs[best_a] = 1-epsilon+epsilon/nA
return probs
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
#print("Q{}, {}".format(state,actions[i]))
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=0.99999, eps_min=0.1):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
        epsilon = max(epsilon * eps_decay, eps_min)
        episode = generate_episode_from_Q(env, Q, epsilon, nA)
        #print(episode)
        Q = update_Q(env, episode, Q, alpha, gamma)
    policy = dict()
    for k, v in Q.items():
        policy[k] = np.argmax(v)
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.01)
###Output
Episode 500000/500000.
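###Markdown
Before plotting, here is a hedged sketch of how the returned `policy` dictionary can be used directly; the sample size of 10000 and the random fallback for unseen states are assumptions added here for illustration.
###Code
# Illustration only: play hands greedily with the learned policy and estimate the win rate.
n_games, wins = 10000, 0   # assumed sample size
for _ in range(n_games):
    state = env.reset()
    while True:
        # follow the learned action if the state was seen during training, else act randomly
        action = policy[state] if state in policy else env.action_space.sample()
        state, reward, done, info = env.step(action)
        if done:
            wins += reward > 0
            break
print('approximate win rate under the learned policy: {:.3f}'.format(wins / n_games))
###Output
_____no_output_____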
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
print("hit") if action > 0 else print("stick")
state, reward, done, info = env.step(action)
if done:
# print('state: ', state)
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(9, 9, False)
hit
(11, 9, False)
hit
(16, 9, False)
hit
End game! Reward: -1
You lost :(
(18, 10, False)
stick
End game! Reward: -1.0
You lost :(
(18, 7, False)
hit
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((21, 3, True), 1, 0), ((18, 3, False), 1, -1)]
[((12, 10, False), 0, -1.0)]
[((15, 3, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
    # sum of returns observed for each state-action pair
    returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
    # visit count for each state-action pair
    N = defaultdict(lambda: np.zeros(env.action_space.n))
    # action-value estimates
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# generate episode
episode = generate_episode(env)
# get states actions and reward
states, actions, rewards = zip(*episode)
# compute discount factor for reward
discount = np.array([gamma**i for i in range(len(rewards)+1)])
# print(discount)
# iterate over states in episode
for i, state in enumerate(states):
            # add the discounted return from step i onward to the running sum for this state-action pair
# print(rewards)
# print(discount)
# cumul_reward = rewards[i:]*discount[:-(1+i)]
returns_sum[state][actions[i]] += sum(rewards[i:]*discount[:-(1+i)])
            # count one more visit to this state-action pair
N[state][actions[i]] += 1
            # estimate the Q value for the state-action pair as returns_sum divided by N
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
Q = mc_prediction_q(env, 5, generate_episode_from_limit_stochastic)
def mc_prediction_q_(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# generate an episode
episode = generate_episode(env)
print("episode {}".format(i_episode))
print(episode)
# obtain the states, actions, and rewards
states, actions, rewards = zip(*episode)
print(f"states: {states}")
print(f"actions: {actions}")
print(f"rewards: {rewards}")
# prepare for discounting
# compute discount factor for each reward
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
print(f"discounts: {discounts}")
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
# obtain the action-value function
Q = mc_prediction_q(env, 5, generate_episode_from_limit_stochastic)
###Output
_____no_output_____
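###Markdown
A small worked example of the slicing trick `rewards[i:] * discounts[:-(1+i)]` used above (the 3-step reward sequence is made up for illustration): with $\gamma = 0.9$ it produces the discounted return from each step onward.
###Code
import numpy as np
# Illustration only: discounted returns G_i for a hypothetical 3-step episode.
rewards = (0, 0, 1)
gamma = 0.9
discounts = np.array([gamma**i for i in range(len(rewards)+1)])   # [1, 0.9, 0.81, 0.729]
for i in range(len(rewards)):
    print('G_{} = {:.2f}'.format(i, sum(rewards[i:] * discounts[:-(1+i)])))
# prints G_0 = 0.81, G_1 = 0.90, G_2 = 1.00
###Output
_____no_output_____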
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def get_probs_Q(Q_s, epsilon, nA):
    # epsilon-greedy: derive action probabilities from epsilon and the Q values of the state
    # start by giving every action probability epsilon / nA
    policy_s = np.ones(nA) * epsilon / nA
    # index of the greedy (highest-value) action in Q_s
    b_action = np.argmax(Q_s)
    # the greedy action receives the remaining probability mass: 1 - epsilon + epsilon / nA
    policy_s[b_action] = 1 - epsilon + (epsilon / nA)
    return policy_s
def get_episode_by_Q(bj_env, Q, nA, epsilon):
episode = []
state = bj_env.reset()
while True:
if state in Q:
probs = get_probs_Q(Q[state], epsilon, nA)
action = np.random.choice(np.arange(nA), p=probs)
else:
action = bj_env.action_space.sample()
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def update_Q(Q, episode, alpha, gamma):
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
G_t = sum(rewards[i:]*discounts[:-(1+i)])
delta = G_t - Q[state][actions[i]]
Q[state][actions[i]] = Q[state][actions[i]] + alpha * delta
# if state and actions[i] in Q:
# G_t = sum(rewards[i:]*discounts[:-(1+i)])
# delta = G_t - Q[state][actions[i]]
# Q[state][actions[i]] = Q[state][actions[i]] + alpha * delta
# else:
# Q[state][actions[i]] = sum(rewards[i:]*discounts[:-(1+i)])
# old_Q = Q[state][actions[i]]
# Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def get_episode_by_Q(env, Q, nA, epsilon):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
        action = np.random.choice(np.arange(nA), p=get_probs_Q(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs_Q(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(Q, episode, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
epsilon = max(epsilon*eps_decay, eps_min)
## TODO: complete the function
        # generate an episode with the epsilon-greedy policy derived from the current Q table
        # (states not yet in the Q table fall back to a random action)
# get new states, actions, rewards
# get discout factor for rewards
# update Q table
episode = get_episode_by_Q(env, Q, nA, epsilon)
Q = update_Q(Q, episode, alpha, gamma)
# generate policy
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 1000000, 0.02)
###Output
Episode 1000000/1000000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
import pprint
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
/Users/rotobi/anaconda3/envs/deep_rl/lib/python3.6/site-packages/gym/envs/registration.py:14: PkgResourcesDeprecationWarning: Parameters to load are deprecated. Call .resolve and .require separately.
result = entry_point.load(False)
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
action = env.action_space.sample()
print('s:', state, '\ta:', action)
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(12, 10, False)
End game! Reward: -1
You lost :(
(16, 10, False)
End game! Reward: -1
You lost :(
(11, 8, False)
(16, 8, False)
(20, 8, False)
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(10):
print(generate_episode_from_limit_stochastic(env))
###Output
[((18, 10, False), 0, 1.0)]
[((15, 2, False), 1, -1)]
[((12, 10, True), 1, 0), ((20, 10, True), 0, 0.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
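One equivalent way to organize the return computation, shown in the hedged sketch below (names such as `compute_returns` are assumptions added here for illustration), is to walk the episode backwards and accumulate $G_t = R_{t+1} + \gamma G_{t+1}$, which avoids building the array of discount factors. The notebook's own implementation follows after the sketch.
###Code
# Illustration only: per-step returns via backward accumulation, G_t = R_{t+1} + gamma * G_{t+1}.
def compute_returns(rewards, gamma=1.0):
    returns = [0.0] * len(rewards)
    G = 0.0
    for t in reversed(range(len(rewards))):
        G = rewards[t] + gamma * G   # rewards[t] is R_{t+1} in the episode's (S, A, R) tuples
        returns[t] = G
    return returns
print(compute_returns((0, 0, 1), gamma=0.9))   # [0.81, 0.9, 1.0]
###Output
_____no_output_____
###Markdown
The implementation for this exercise follows.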
###Code
def monitor_progress(i_episode, num_episodes):
if i_episode % (num_episodes/20) == 0:
print("\rEpisode {}/{}.\n".format(i_episode, num_episodes), end="")
sys.stdout.flush()
def is_first_visit(state_t, action_t, episode):
# In Blackjack, we always have first visits within episodes
return True
def mc_prediction_q(env, num_episodes, generate_episode, gamma=.9):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
monitor_progress(i_episode, num_episodes)
episode = generate_episode(env)
_, _, episode_reward = episode[-1]
# print("episode={}, len={}, episode_reward={}".format(i_episode,
# len(episode),
# episode_reward))
T = len(episode)-1 # Final time T
for t, step in enumerate(episode):
state_t, action_t, reward_t = step
# Expected return G_t = R_{t+1} + gamma*R{t+2} + gamma^2*R_{t+3} + ...
# For black jack only the very last t gives us a reward
G_t = episode_reward * gamma**(T-t)
# print('\tt={}: s={}, a={}, r={}; G_t={}, T={}'.format(t, state_t, action_t, reward_t, G_t, T))
if is_first_visit(state_t, action_t, episode):
N[state_t][action_t] += 1
returns_sum[state_t][action_t] += G_t # Expected return
Q[state_t][action_t] = returns_sum[state_t][action_t] / N[state_t][action_t]
print("len(N):", len(N))
pprint.pprint(N)
print("len(returns_sum):", len(returns_sum))
pprint.pprint(returns_sum)
print("len(Q):", len(Q))
pprint.pprint(Q)
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 25000/500000.
Episode 50000/500000.
Episode 75000/500000.
Episode 100000/500000.
Episode 125000/500000.
Episode 150000/500000.
Episode 175000/500000.
Episode 200000/500000.
Episode 225000/500000.
Episode 250000/500000.
Episode 275000/500000.
Episode 300000/500000.
Episode 325000/500000.
Episode 350000/500000.
Episode 375000/500000.
Episode 400000/500000.
Episode 425000/500000.
Episode 450000/500000.
Episode 475000/500000.
Episode 500000/500000.
len(N): 280
defaultdict(<function mc_prediction_q.<locals>.<lambda> at 0x1a167f68c8>,
{(4, 1, False): array([ 58., 175.]),
(4, 2, False): array([ 41., 184.]),
(4, 3, False): array([ 40., 151.]),
(4, 4, False): array([ 57., 185.]),
(4, 5, False): array([ 31., 178.]),
(4, 6, False): array([ 50., 176.]),
(4, 7, False): array([ 44., 185.]),
(4, 8, False): array([ 48., 185.]),
(4, 9, False): array([ 45., 164.]),
(4, 10, False): array([166., 764.]),
(5, 1, False): array([ 81., 388.]),
(5, 2, False): array([104., 343.]),
(5, 3, False): array([ 80., 370.]),
(5, 4, False): array([106., 324.]),
(5, 5, False): array([ 88., 365.]),
(5, 6, False): array([104., 374.]),
(5, 7, False): array([ 92., 353.]),
(5, 8, False): array([102., 365.]),
(5, 9, False): array([ 93., 312.]),
(5, 10, False): array([ 383., 1468.]),
(6, 1, False): array([136., 558.]),
(6, 2, False): array([132., 558.]),
(6, 3, False): array([138., 583.]),
(6, 4, False): array([115., 532.]),
(6, 5, False): array([120., 520.]),
(6, 6, False): array([145., 611.]),
(6, 7, False): array([152., 538.]),
(6, 8, False): array([142., 545.]),
(6, 9, False): array([133., 601.]),
(6, 10, False): array([ 578., 2224.]),
(7, 1, False): array([205., 771.]),
(7, 2, False): array([217., 766.]),
(7, 3, False): array([183., 750.]),
(7, 4, False): array([193., 754.]),
(7, 5, False): array([177., 770.]),
(7, 6, False): array([183., 746.]),
(7, 7, False): array([220., 784.]),
(7, 8, False): array([196., 763.]),
(7, 9, False): array([186., 757.]),
(7, 10, False): array([ 772., 2976.]),
(8, 1, False): array([235., 961.]),
(8, 2, False): array([251., 997.]),
(8, 3, False): array([242., 926.]),
(8, 4, False): array([229., 980.]),
(8, 5, False): array([242., 960.]),
(8, 6, False): array([ 220., 1044.]),
(8, 7, False): array([248., 984.]),
(8, 8, False): array([256., 998.]),
(8, 9, False): array([ 247., 1020.]),
(8, 10, False): array([ 959., 3887.]),
(9, 1, False): array([ 323., 1137.]),
(9, 2, False): array([ 311., 1177.]),
(9, 3, False): array([ 277., 1157.]),
(9, 4, False): array([ 323., 1227.]),
(9, 5, False): array([ 302., 1241.]),
(9, 6, False): array([ 314., 1226.]),
(9, 7, False): array([ 315., 1174.]),
(9, 8, False): array([ 295., 1243.]),
(9, 9, False): array([ 296., 1181.]),
(9, 10, False): array([1214., 4867.]),
(10, 1, False): array([ 395., 1430.]),
(10, 2, False): array([ 370., 1423.]),
(10, 3, False): array([ 370., 1449.]),
(10, 4, False): array([ 385., 1475.]),
(10, 5, False): array([ 349., 1421.]),
(10, 6, False): array([ 358., 1378.]),
(10, 7, False): array([ 371., 1435.]),
(10, 8, False): array([ 334., 1449.]),
(10, 9, False): array([ 361., 1427.]),
(10, 10, False): array([1476., 5776.]),
(11, 1, False): array([ 426., 1762.]),
(11, 2, False): array([ 426., 1764.]),
(11, 3, False): array([ 428., 1727.]),
(11, 4, False): array([ 443., 1670.]),
(11, 5, False): array([ 409., 1688.]),
(11, 6, False): array([ 480., 1752.]),
(11, 7, False): array([ 381., 1708.]),
(11, 8, False): array([ 420., 1654.]),
(11, 9, False): array([ 406., 1651.]),
(11, 10, False): array([1705., 6789.]),
(12, 1, False): array([ 880., 3407.]),
(12, 1, True): array([ 41., 168.]),
(12, 2, False): array([ 839., 3448.]),
(12, 2, True): array([ 47., 225.]),
(12, 3, False): array([ 856., 3432.]),
(12, 3, True): array([ 48., 191.]),
(12, 4, False): array([ 830., 3392.]),
(12, 4, True): array([ 48., 178.]),
(12, 5, False): array([ 861., 3389.]),
(12, 5, True): array([ 52., 171.]),
(12, 6, False): array([ 814., 3405.]),
(12, 6, True): array([ 42., 179.]),
(12, 7, False): array([ 912., 3448.]),
(12, 7, True): array([ 45., 195.]),
(12, 8, False): array([ 896., 3427.]),
(12, 8, True): array([ 49., 176.]),
(12, 9, False): array([ 864., 3370.]),
(12, 9, True): array([ 47., 199.]),
(12, 10, False): array([ 3294., 13784.]),
(12, 10, True): array([194., 721.]),
(13, 1, False): array([ 844., 3523.]),
(13, 1, True): array([ 84., 369.]),
(13, 2, False): array([ 853., 3582.]),
(13, 2, True): array([ 93., 427.]),
(13, 3, False): array([ 827., 3478.]),
(13, 3, True): array([124., 378.]),
(13, 4, False): array([ 870., 3501.]),
(13, 4, True): array([111., 397.]),
(13, 5, False): array([ 851., 3431.]),
(13, 5, True): array([ 83., 372.]),
(13, 6, False): array([ 897., 3359.]),
(13, 6, True): array([ 95., 345.]),
(13, 7, False): array([ 889., 3477.]),
(13, 7, True): array([ 94., 374.]),
(13, 8, False): array([ 892., 3538.]),
(13, 8, True): array([ 92., 388.]),
(13, 9, False): array([ 901., 3502.]),
(13, 9, True): array([ 88., 386.]),
(13, 10, False): array([ 3442., 14074.]),
(13, 10, True): array([ 375., 1523.]),
(14, 1, False): array([ 901., 3574.]),
(14, 1, True): array([ 96., 370.]),
(14, 2, False): array([ 895., 3620.]),
(14, 2, True): array([ 83., 404.]),
(14, 3, False): array([ 829., 3459.]),
(14, 3, True): array([ 96., 426.]),
(14, 4, False): array([ 913., 3555.]),
(14, 4, True): array([108., 358.]),
(14, 5, False): array([ 861., 3518.]),
(14, 5, True): array([ 79., 402.]),
(14, 6, False): array([ 893., 3471.]),
(14, 6, True): array([111., 402.]),
(14, 7, False): array([ 891., 3473.]),
(14, 7, True): array([108., 388.]),
(14, 8, False): array([ 894., 3581.]),
(14, 8, True): array([ 97., 408.]),
(14, 9, False): array([ 866., 3432.]),
(14, 9, True): array([ 85., 430.]),
(14, 10, False): array([ 3500., 14168.]),
(14, 10, True): array([ 381., 1602.]),
(15, 1, False): array([ 898., 3676.]),
(15, 1, True): array([ 87., 447.]),
(15, 2, False): array([ 899., 3482.]),
(15, 2, True): array([129., 450.]),
(15, 3, False): array([ 932., 3657.]),
(15, 3, True): array([109., 426.]),
(15, 4, False): array([ 867., 3597.]),
(15, 4, True): array([ 97., 412.]),
(15, 5, False): array([ 953., 3607.]),
(15, 5, True): array([ 99., 439.]),
(15, 6, False): array([ 906., 3591.]),
(15, 6, True): array([116., 479.]),
(15, 7, False): array([ 897., 3538.]),
(15, 7, True): array([108., 445.]),
(15, 8, False): array([ 912., 3584.]),
(15, 8, True): array([113., 421.]),
(15, 9, False): array([ 910., 3529.]),
(15, 9, True): array([100., 437.]),
(15, 10, False): array([ 3592., 14456.]),
(15, 10, True): array([ 451., 1767.]),
(16, 1, False): array([ 873., 3598.]),
(16, 1, True): array([129., 475.]),
(16, 2, False): array([ 896., 3605.]),
(16, 2, True): array([116., 473.]),
(16, 3, False): array([ 971., 3613.]),
(16, 3, True): array([106., 483.]),
(16, 4, False): array([ 914., 3585.]),
(16, 4, True): array([121., 454.]),
(16, 5, False): array([ 904., 3555.]),
(16, 5, True): array([107., 523.]),
(16, 6, False): array([ 965., 3745.]),
(16, 6, True): array([106., 488.]),
(16, 7, False): array([ 887., 3553.]),
(16, 7, True): array([118., 456.]),
(16, 8, False): array([ 945., 3529.]),
(16, 8, True): array([121., 484.]),
(16, 9, False): array([ 930., 3583.]),
(16, 9, True): array([117., 473.]),
(16, 10, False): array([ 3631., 14523.]),
(16, 10, True): array([ 494., 1941.]),
(17, 1, False): array([ 936., 3572.]),
(17, 1, True): array([126., 546.]),
(17, 2, False): array([ 879., 3653.]),
(17, 2, True): array([132., 511.]),
(17, 3, False): array([ 959., 3645.]),
(17, 3, True): array([133., 504.]),
(17, 4, False): array([ 905., 3692.]),
(17, 4, True): array([133., 485.]),
(17, 5, False): array([ 931., 3544.]),
(17, 5, True): array([125., 510.]),
(17, 6, False): array([ 931., 3640.]),
(17, 6, True): array([139., 498.]),
(17, 7, False): array([ 912., 3733.]),
(17, 7, True): array([151., 497.]),
(17, 8, False): array([ 859., 3545.]),
(17, 8, True): array([136., 515.]),
(17, 9, False): array([ 926., 3645.]),
(17, 9, True): array([142., 518.]),
(17, 10, False): array([ 3714., 14641.]),
(17, 10, True): array([ 509., 2123.]),
(18, 1, False): array([ 933., 3553.]),
(18, 1, True): array([153., 536.]),
(18, 2, False): array([ 906., 3640.]),
(18, 2, True): array([128., 565.]),
(18, 3, False): array([ 925., 3806.]),
(18, 3, True): array([144., 561.]),
(18, 4, False): array([ 911., 3604.]),
(18, 4, True): array([115., 591.]),
(18, 5, False): array([ 901., 3532.]),
(18, 5, True): array([138., 551.]),
(18, 6, False): array([ 933., 3598.]),
(18, 6, True): array([140., 565.]),
(18, 7, False): array([ 859., 3651.]),
(18, 7, True): array([149., 565.]),
(18, 8, False): array([ 917., 3672.]),
(18, 8, True): array([154., 558.]),
(18, 9, False): array([ 833., 3642.]),
(18, 9, True): array([110., 551.]),
(18, 10, False): array([ 3706., 14519.]),
(18, 10, True): array([ 548., 2182.]),
(19, 1, False): array([3517., 893.]),
(19, 1, True): array([623., 158.]),
(19, 2, False): array([3569., 887.]),
(19, 2, True): array([633., 157.]),
(19, 3, False): array([3541., 872.]),
(19, 3, True): array([574., 140.]),
(19, 4, False): array([3575., 918.]),
(19, 4, True): array([551., 154.]),
(19, 5, False): array([3504., 903.]),
(19, 5, True): array([622., 138.]),
(19, 6, False): array([3543., 877.]),
(19, 6, True): array([606., 146.]),
(19, 7, False): array([3502., 917.]),
(19, 7, True): array([613., 157.]),
(19, 8, False): array([3615., 898.]),
(19, 8, True): array([603., 139.]),
(19, 9, False): array([3396., 892.]),
(19, 9, True): array([583., 172.]),
(19, 10, False): array([14154., 3481.]),
(19, 10, True): array([2374., 606.]),
(20, 1, False): array([5067., 1294.]),
(20, 1, True): array([643., 180.]),
(20, 2, False): array([4935., 1200.]),
(20, 2, True): array([655., 149.]),
(20, 3, False): array([5181., 1327.]),
(20, 3, True): array([677., 134.]),
(20, 4, False): array([5083., 1286.]),
(20, 4, True): array([658., 176.]),
(20, 5, False): array([5027., 1194.]),
(20, 5, True): array([592., 157.]),
(20, 6, False): array([4935., 1265.]),
(20, 6, True): array([655., 130.]),
(20, 7, False): array([5012., 1245.]),
(20, 7, True): array([690., 132.]),
(20, 8, False): array([5193., 1257.]),
(20, 8, True): array([632., 153.]),
(20, 9, False): array([4958., 1282.]),
(20, 9, True): array([609., 146.]),
(20, 10, False): array([20180., 4904.]),
(20, 10, True): array([2360., 632.]),
(21, 1, False): array([2189., 548.]),
(21, 1, True): array([1713., 464.]),
(21, 2, False): array([2199., 573.]),
(21, 2, True): array([1700., 442.]),
(21, 3, False): array([2195., 527.]),
(21, 3, True): array([1750., 452.]),
(21, 4, False): array([2160., 544.]),
(21, 4, True): array([1763., 434.]),
(21, 5, False): array([2223., 552.]),
(21, 5, True): array([1686., 447.]),
(21, 6, False): array([2185., 552.]),
(21, 6, True): array([1766., 437.]),
(21, 7, False): array([2189., 565.]),
(21, 7, True): array([1782., 444.]),
(21, 8, False): array([2229., 510.]),
(21, 8, True): array([1749., 414.]),
(21, 9, False): array([2091., 510.]),
(21, 9, True): array([1745., 403.]),
(21, 10, False): array([8720., 2163.]),
(21, 10, True): array([7044., 1771.])})
len(returns_sum): 280
defaultdict(<function mc_prediction_q.<locals>.<lambda> at 0x1a165b8f28>,
{(4, 1, False): array([ -50. , -105.255]),
(4, 2, False): array([-17. , -46.6587]),
(4, 3, False): array([ -4. , -42.12549]),
(4, 4, False): array([ -7. , -48.2508]),
(4, 5, False): array([ -1. , -49.4154]),
(4, 6, False): array([-14. , -55.7568]),
(4, 7, False): array([-22. , -72.66951]),
(4, 8, False): array([-26. , -78.2982]),
(4, 9, False): array([-25. , -67.9491]),
(4, 10, False): array([ -98. , -339.1155]),
(5, 1, False): array([ -61. , -208.22679]),
(5, 2, False): array([-44. , -97.02531]),
(5, 3, False): array([-18. , -99.92502]),
(5, 4, False): array([-28. , -91.1817]),
(5, 5, False): array([-16. , -96.69942]),
(5, 6, False): array([ -6. , -111.2598]),
(5, 7, False): array([ -56. , -109.442169]),
(5, 8, False): array([ -40. , -137.69559]),
(5, 9, False): array([-65. , -88.14951]),
(5, 10, False): array([-215. , -669.01698]),
(6, 1, False): array([-102. , -289.58679]),
(6, 2, False): array([ -38. , -201.4119]),
(6, 3, False): array([ -20. , -194.69349]),
(6, 4, False): array([ -29. , -159.62859]),
(6, 5, False): array([ -22. , -154.7541]),
(6, 6, False): array([ -35. , -182.84022]),
(6, 7, False): array([ -80. , -203.4369]),
(6, 8, False): array([ -56. , -214.73451]),
(6, 9, False): array([ -79. , -252.32229]),
(6, 10, False): array([ -336. , -1098.09261]),
(7, 1, False): array([-155. , -378.93051]),
(7, 2, False): array([ -71. , -293.5179]),
(7, 3, False): array([ -53. , -260.2989]),
(7, 4, False): array([ -35. , -240.885]),
(7, 5, False): array([ -43. , -260.1756]),
(7, 6, False): array([ -27. , -248.0913]),
(7, 7, False): array([ -96. , -285.94899]),
(7, 8, False): array([-128. , -270.126]),
(7, 9, False): array([-100. , -327.3066]),
(7, 10, False): array([ -486. , -1364.18769]),
(8, 1, False): array([-183. , -504.73269]),
(8, 2, False): array([ -89. , -383.55939]),
(8, 3, False): array([ -70. , -308.7297]),
(8, 4, False): array([ -65. , -333.3861]),
(8, 5, False): array([ -44. , -325.944]),
(8, 6, False): array([ -24. , -311.6088]),
(8, 7, False): array([-126. , -334.20402]),
(8, 8, False): array([-128. , -354.7512]),
(8, 9, False): array([-149. , -469.096659]),
(8, 10, False): array([ -595. , -1897.15851]),
(9, 1, False): array([-255. , -478.1538]),
(9, 2, False): array([-101. , -182.9151]),
(9, 3, False): array([ -69. , -236.19339]),
(9, 4, False): array([ -49. , -178.3179]),
(9, 5, False): array([ -74. , -140.7339]),
(9, 6, False): array([ -32. , -168.69051]),
(9, 7, False): array([-175. , -122.6988]),
(9, 8, False): array([-159. , -175.57749]),
(9, 9, False): array([-154. , -283.9221]),
(9, 10, False): array([ -708. , -1626.14529]),
(10, 1, False): array([-299. , -464.78061]),
(10, 2, False): array([-110. , -68.126949]),
(10, 3, False): array([-82. , -88.0911]),
(10, 4, False): array([-71. , -67.2651]),
(10, 5, False): array([ -57. , -105.867]),
(10, 6, False): array([-76. , -85.7988]),
(10, 7, False): array([-187. , -105.2406]),
(10, 8, False): array([-166. , -96.0408]),
(10, 9, False): array([-213. , -202.74471]),
(10, 10, False): array([ -862. , -1271.21328]),
(11, 1, False): array([-322. , -478.314]),
(11, 2, False): array([-116. , -158.4081]),
(11, 3, False): array([-138. , -115.6374]),
(11, 4, False): array([ -93. , -120.2139]),
(11, 5, False): array([-81. , -77.5791]),
(11, 6, False): array([-60. , -52.362]),
(11, 7, False): array([-189. , -89.847]),
(11, 8, False): array([-220. , -73.95651]),
(11, 9, False): array([-210. , -156.726]),
(11, 10, False): array([ -991. , -1115.4213]),
(12, 1, False): array([ -690. , -2033.11569]),
(12, 1, True): array([-29. , -51.38838]),
(12, 2, False): array([ -287. , -1654.0609]),
(12, 2, True): array([-11. , -35.145]),
(12, 3, False): array([ -178. , -1598.61951]),
(12, 3, True): array([-18. , -22.5162]),
(12, 4, False): array([ -162. , -1499.909]),
(12, 4, True): array([ -4. , -34.81938]),
(12, 5, False): array([ -93. , -1360.7997]),
(12, 5, True): array([-10. , -5.20209]),
(12, 6, False): array([ -136. , -1492.6325]),
(12, 6, True): array([-8. , -7.97814]),
(12, 7, False): array([ -396. , -1458.6597]),
(12, 7, True): array([-21. , -11.45709]),
(12, 8, False): array([ -484. , -1443.1528]),
(12, 8, True): array([-11. , 2.63979]),
(12, 9, False): array([ -446. , -1648.5559]),
(12, 9, True): array([-21. , -19.1565]),
(12, 10, False): array([-1840. , -7324.52558]),
(12, 10, True): array([-118. , -191.551869]),
(13, 1, False): array([ -642. , -2134.527]),
(13, 1, True): array([ -60. , -135.86859]),
(13, 2, False): array([ -305. , -1812.24041]),
(13, 2, True): array([-33. , -56.163951]),
(13, 3, False): array([ -207. , -1764.6939]),
(13, 3, True): array([-44. , -59.5476]),
(13, 4, False): array([ -156. , -1702.34]),
(13, 4, True): array([-23. , -37.79379]),
(13, 5, False): array([ -137. , -1635.372]),
(13, 5, True): array([-35. , -71.66781]),
(13, 6, False): array([ -137. , -1545.6549]),
(13, 6, True): array([-27. , -32.38929]),
(13, 7, False): array([ -389. , -1701.4319]),
(13, 7, True): array([-28. , -24.9084]),
(13, 8, False): array([ -452. , -1719.2909]),
(13, 8, True): array([-42. , -22.4433]),
(13, 9, False): array([ -513. , -1859.5649]),
(13, 9, True): array([-60. , -71.9613]),
(13, 10, False): array([-2054. , -7899.15579]),
(13, 10, True): array([-227. , -407.85237]),
(14, 1, False): array([ -653. , -2365.334]),
(14, 1, True): array([ -84. , -137.0394]),
(14, 2, False): array([ -287. , -1933.1049]),
(14, 2, True): array([-23. , -56.37969]),
(14, 3, False): array([ -227. , -1794.647]),
(14, 3, True): array([-26. , -81.47169]),
(14, 4, False): array([ -159. , -1917.795]),
(14, 4, True): array([-32. , -62.40321]),
(14, 5, False): array([ -93. , -1864.604]),
(14, 5, True): array([-11. , -70.6833]),
(14, 6, False): array([ -91. , -1793.528]),
(14, 6, True): array([-25. , -25.1127]),
(14, 7, False): array([ -349. , -1803.92]),
(14, 7, True): array([-68. , -33.62031]),
(14, 8, False): array([ -390. , -1875.5337]),
(14, 8, True): array([-41. , -52.01649]),
(14, 9, False): array([ -480. , -1847.6301]),
(14, 9, True): array([-61. , -84.19752]),
(14, 10, False): array([-1938. , -8550.3029]),
(14, 10, True): array([-243. , -487.326501]),
(15, 1, False): array([ -664. , -2515.257]),
(15, 1, True): array([ -69. , -197.685]),
(15, 2, False): array([ -249. , -2058.43]),
(15, 2, True): array([-25. , -67.73319]),
(15, 3, False): array([ -222. , -2117.3099]),
(15, 3, True): array([-31. , -66.2139]),
(15, 4, False): array([ -139. , -2089.781]),
(15, 4, True): array([-19. , -68.714559]),
(15, 5, False): array([ -151. , -2073.53]),
(15, 5, True): array([ -5. , -67.167]),
(15, 6, False): array([ -148. , -1925.9759]),
(15, 6, True): array([-26. , -47.88261]),
(15, 7, False): array([ -421. , -2013.1271]),
(15, 7, True): array([-52. , -47.7738]),
(15, 8, False): array([ -428. , -2117.806]),
(15, 8, True): array([-57. , -60.4287]),
(15, 9, False): array([ -464. , -2112.437]),
(15, 9, True): array([-52. , -81.621]),
(15, 10, False): array([-2082. , -9319.03]),
(15, 10, True): array([-283. , -540.15948]),
(16, 1, False): array([ -681. , -2582.591]),
(16, 1, True): array([ -97. , -218.9151]),
(16, 2, False): array([ -200. , -2206.429]),
(16, 2, True): array([ -30. , -110.4948]),
(16, 3, False): array([ -237. , -2171.531]),
(16, 3, True): array([-44. , -76.79619]),
(16, 4, False): array([ -176. , -2267.706]),
(16, 4, True): array([-33. , -96.66261]),
(16, 5, False): array([ -148. , -2285.84]),
(16, 5, True): array([ -23. , -103.60971]),
(16, 6, False): array([ -145. , -2279.815]),
(16, 6, True): array([ -24. , -115.3071]),
(16, 7, False): array([ -409. , -2100.074]),
(16, 7, True): array([ -58. , -109.5507]),
(16, 8, False): array([ -467. , -2086.632]),
(16, 8, True): array([ -61. , -117.4392]),
(16, 9, False): array([ -526. , -2225.294]),
(16, 9, True): array([ -49. , -154.9737]),
(16, 10, False): array([-2117. , -9636.8619]),
(16, 10, True): array([-278. , -621.26361]),
(17, 1, False): array([ -636. , -2626.469]),
(17, 1, True): array([ -83. , -243.5733]),
(17, 2, False): array([ -124. , -2416.71]),
(17, 2, True): array([ -16. , -127.22949]),
(17, 3, False): array([ -141. , -2387.71]),
(17, 3, True): array([ -6. , -147.834]),
(17, 4, False): array([ -48. , -2351.502]),
(17, 4, True): array([ -3. , -113.5998]),
(17, 5, False): array([ -49. , -2319.5]),
(17, 5, True): array([ -2. , -113.59971]),
(17, 6, False): array([ 15. , -2441.071]),
(17, 6, True): array([ 4. , -104.49549]),
(17, 7, False): array([ -58. , -2419.169]),
(17, 7, True): array([ -26. , -124.4169]),
(17, 8, False): array([ -314. , -2268.631]),
(17, 8, True): array([-65. , -96.44841]),
(17, 9, False): array([ -343. , -2343.66]),
(17, 9, True): array([ -35. , -132.2631]),
(17, 10, False): array([ -1704. , -10278.754]),
(17, 10, True): array([-247. , -736.54281]),
(18, 1, False): array([ -338. , -2711.181]),
(18, 1, True): array([ -45. , -245.4489]),
(18, 2, False): array([ 139. , -2536.97]),
(18, 2, True): array([ 9. , -136.1511]),
(18, 3, False): array([ 126. , -2631.75]),
(18, 3, True): array([ 15. , -141.84459]),
(18, 4, False): array([ 157. , -2533.51]),
(18, 4, True): array([ 13. , -192.861]),
(18, 5, False): array([ 183. , -2467.54]),
(18, 5, True): array([ 26. , -133.2549]),
(18, 6, False): array([ 226. , -2484.19]),
(18, 6, True): array([ 35. , -127.494]),
(18, 7, False): array([ 360. , -2399.66]),
(18, 7, True): array([ 78. , -94.329]),
(18, 8, False): array([ 44. , -2457.42]),
(18, 8, True): array([ 21. , -125.3529]),
(18, 9, False): array([ -104. , -2601.83]),
(18, 9, True): array([ -24. , -150.4134]),
(18, 10, False): array([ -807. , -10694.56]),
(18, 10, True): array([-139. , -755.2809]),
(19, 1, False): array([-391. , -759.39]),
(19, 1, True): array([-62. , -56.0502]),
(19, 2, False): array([1378. , -712.29]),
(19, 2, True): array([242. , -7.2441]),
(19, 3, False): array([1483. , -669.7]),
(19, 3, True): array([253. , -13.842]),
(19, 4, False): array([1449. , -725.69]),
(19, 4, True): array([208. , -15.291]),
(19, 5, False): array([1581. , -698.3]),
(19, 5, True): array([257. , -4.3299]),
(19, 6, False): array([1781. , -701.94]),
(19, 6, True): array([318. , -9.99]),
(19, 7, False): array([2172. , -679.58]),
(19, 7, True): array([338. , -13.3542]),
(19, 8, False): array([2128. , -699.78]),
(19, 8, True): array([365. , -14.805]),
(19, 9, False): array([1067. , -710.4]),
(19, 9, True): array([178. , -24.912]),
(19, 10, False): array([ -280. , -2728.2]),
(19, 10, True): array([ -77. , -157.9851]),
(20, 1, False): array([ 719. , -1185.2]),
(20, 1, True): array([ 94. , -58.3299]),
(20, 2, False): array([ 3195. , -1052.2]),
(20, 2, True): array([408. , -7.5528]),
(20, 3, False): array([ 3319. , -1135.4]),
(20, 3, True): array([420. , -8.8821]),
(20, 4, False): array([ 3387., -1158.]),
(20, 4, True): array([452. , -21.843]),
(20, 5, False): array([ 3373. , -1075.8]),
(20, 5, True): array([378. , -21.339]),
(20, 6, False): array([ 3442. , -1111.5]),
(20, 6, True): array([479. , -18.378]),
(20, 7, False): array([ 3945. , -1087.4]),
(20, 7, True): array([518. , -3.6351]),
(20, 8, False): array([ 4105. , -1113.8]),
(20, 8, True): array([499. , 2.43]),
(20, 9, False): array([ 3773., -1113.]),
(20, 9, True): array([439., -27.]),
(20, 10, False): array([ 8739. , -4361.4]),
(20, 10, True): array([1033. , -149.769]),
(21, 1, False): array([1410., -548.]),
(21, 1, True): array([1118. , -151.245]),
(21, 2, False): array([1913., -573.]),
(21, 2, True): array([1481. , -46.2069]),
(21, 3, False): array([1970., -527.]),
(21, 3, True): array([1556. , -23.697]),
(21, 4, False): array([1942., -544.]),
(21, 4, True): array([1567. , -31.7349]),
(21, 5, False): array([1974., -552.]),
(21, 5, True): array([1485. , -6.039]),
(21, 6, False): array([1971., -552.]),
(21, 6, True): array([1583. , -29.295]),
(21, 7, False): array([2036., -565.]),
(21, 7, True): array([1643. , 23.0679]),
(21, 8, False): array([2100., -510.]),
(21, 8, True): array([1632. , -11.835]),
(21, 9, False): array([1973., -510.]),
(21, 9, True): array([1630. , -11.5659]),
(21, 10, False): array([ 7762., -2163.]),
(21, 10, True): array([6260. , -326.32461])})
len(Q): 280
defaultdict(<function mc_prediction_q.<locals>.<lambda> at 0x1a167f6268>,
{(4, 1, False): array([-0.86206897, -0.60145714]),
(4, 2, False): array([-0.41463415, -0.25357989]),
(4, 3, False): array([-0.1 , -0.27897675]),
(4, 4, False): array([-0.12280702, -0.26081514]),
(4, 5, False): array([-0.03225806, -0.27761461]),
(4, 6, False): array([-0.28 , -0.3168]),
(4, 7, False): array([-0.5 , -0.39280816]),
(4, 8, False): array([-0.54166667, -0.42323351]),
(4, 9, False): array([-0.55555556, -0.41432378]),
(4, 10, False): array([-0.59036145, -0.44386846]),
(5, 1, False): array([-0.75308642, -0.53666698]),
(5, 2, False): array([-0.42307692, -0.28287262]),
(5, 3, False): array([-0.225 , -0.27006762]),
(5, 4, False): array([-0.26415094, -0.281425 ]),
(5, 5, False): array([-0.18181818, -0.26492992]),
(5, 6, False): array([-0.05769231, -0.2974861 ]),
(5, 7, False): array([-0.60869565, -0.31003447]),
(5, 8, False): array([-0.39215686, -0.37724819]),
(5, 9, False): array([-0.69892473, -0.28253048]),
(5, 10, False): array([-0.5613577 , -0.45573364]),
(6, 1, False): array([-0.75 , -0.51897274]),
(6, 2, False): array([-0.28787879, -0.36095323]),
(6, 3, False): array([-0.14492754, -0.3339511 ]),
(6, 4, False): array([-0.25217391, -0.30005374]),
(6, 5, False): array([-0.18333333, -0.29760404]),
(6, 6, False): array([-0.24137931, -0.2992475 ]),
(6, 7, False): array([-0.52631579, -0.3781355 ]),
(6, 8, False): array([-0.3943662 , -0.39400828]),
(6, 9, False): array([-0.59398496, -0.41983742]),
(6, 10, False): array([-0.58131488, -0.49374668]),
(7, 1, False): array([-0.75609756, -0.49147926]),
(7, 2, False): array([-0.32718894, -0.38318264]),
(7, 3, False): array([-0.28961749, -0.3470652 ]),
(7, 4, False): array([-0.18134715, -0.31947613]),
(7, 5, False): array([-0.24293785, -0.33789039]),
(7, 6, False): array([-0.14754098, -0.33256206]),
(7, 7, False): array([-0.43636364, -0.36473085]),
(7, 8, False): array([-0.65306122, -0.35403145]),
(7, 9, False): array([-0.53763441, -0.43237332]),
(7, 10, False): array([-0.62953368, -0.4583964 ]),
(8, 1, False): array([-0.7787234 , -0.52521612]),
(8, 2, False): array([-0.35458167, -0.38471353]),
(8, 3, False): array([-0.2892562, -0.3334014]),
(8, 4, False): array([-0.28384279, -0.3401899 ]),
(8, 5, False): array([-0.18181818, -0.339525 ]),
(8, 6, False): array([-0.10909091, -0.29847586]),
(8, 7, False): array([-0.50806452, -0.33963823]),
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
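One way to write the constant-$\alpha$ update that this part asks for: after an episode is generated, each visited state-action pair is nudged towards the return that followed it,

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big( G_t - Q(S_t, A_t) \big),$$

where $G_t$ is the discounted return observed from time step $t$ onwards. The completed implementations further below follow this pattern, combined with an $\epsilon$-greedy choice of actions.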
###Code
def decayed_epsilon(i, epsilon_start, epsilon_end, epsilon_fixed_after):
    # linearly anneal epsilon from epsilon_start down to epsilon_end over the first
    # `epsilon_fixed_after` episodes, then keep it fixed at epsilon_end
if i < epsilon_fixed_after:
epsilon_i = epsilon_start - i*(epsilon_start - epsilon_end)/epsilon_fixed_after
else:
epsilon_i = epsilon_end
return epsilon_i
epsilon_start, epsilon_end = 1.0, 0.1
num_episodes = 1000
epsilon_fixed_after = 0.75 * num_episodes
# for i in range(num_episodes):
# print(i, decayed_epsilon(i, epsilon_start, epsilon_end, epsilon_fixed_after))
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
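As a concrete (made-up) illustration of the observation format:
```
state = (14, 10, False)   # player's cards sum to 14, dealer shows a 10, no usable ace
STICK, HIT = 0, 1         # the two available actions
```
The particular numbers above are only an example; run the cell below to see the actual spaces.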
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(15, 3, True)
End game! Reward: -1.0
You lost :(
(20, 4, False)
End game! Reward: 1.0
You won :)
(10, 1, False)
(20, 1, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
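For instance, given an `episode` returned by this function, a convenient way to separate the states, actions, and rewards (a pattern several of the implementations below rely on) is:
```
states, actions, rewards = zip(*episode)   # (S_0, S_1, ...), (A_0, A_1, ...), (R_1, R_2, ...)
```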
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((17, 4, False), 1, -1.0)]
[((16, 10, False), 1, 0.0), ((18, 10, False), 1, -1.0)]
[((9, 2, False), 1, 0.0), ((19, 2, False), 1, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
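Concretely, whichever variant you implement, the estimate for each state-action pair is the sample average of the returns observed after visiting it:

$$Q(s, a) \approx \frac{\text{returns\_sum}(s, a)}{N(s, a)}, \qquad G_t = \sum_{k=0}^{T-t-1} \gamma^{k} R_{t+k+1},$$

where `returns_sum` and `N` are the dictionaries already initialised in the starter code below.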
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v1')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(12, 9, False)
End game! Reward: -1.0
You lost :(
(17, 9, False)
End game! Reward: -1.0
You lost :(
(15, 7, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
gamma=0.9
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
episode = generate_episode_from_limit_stochastic(env)
print(episode)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
print(discounts)
type(states)
N = defaultdict(lambda: np.zeros(env.action_space.n))
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
print(rewards[i:], discounts[:-(1+i)])
print(Q)
###Output
[((18, 2, True), 1, 0.0), ((17, 2, False), 1, -1.0)]
[1. 0.9 0.81]
(0.0, -1.0) [1. 0.9]
(-1.0,) [1.]
defaultdict(<function <lambda> at 0x7fcfa85db9d0>, {(18, 2, True): array([ 0. , -0.9]), (17, 2, False): array([ 0., -1.])})
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
# generate an episode
episode = generate_episode(env)
# obtain the states, actions, and rewards
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
 Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
action = env.action_space.sample()
print(state, "STICK" if action == 0 else "HIT")
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(15, 1, False) STICK
End game! Reward: -1.0
You lost :(
(17, 10, False) STICK
End game! Reward: -1.0
You lost :(
(5, 6, False) HIT
(8, 6, False) HIT
(17, 6, False) STICK
End game! Reward: 0.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((20, 8, False), 0, 1.0)]
[((8, 5, False), 0, -1.0)]
[((12, 1, False), 1, 0), ((21, 1, False), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
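The implementation below exposes both variants through a `using_first_visit` flag: first-visit MC averages only the return following the first occurrence of a state in each episode, while every-visit MC averages the returns following every occurrence. A hypothetical every-visit call (same arguments as the real call further down, just with the flag flipped) would look like:
```
Q_every = mc_prediction_q(env, 50000, generate_episode_from_limit_stochastic, using_first_visit=False)
```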
###Code
env.action_space.n
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0, using_first_visit=True):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
if (using_first_visit):
# IMPLEMENTING FIRST-VISIT MC
list_of_seen_states = []
for i, (state, action, reward) in enumerate(episode):
if (state not in list_of_seen_states):
sum_rewards_with_discount = 0
                    # For each first-visited state, compute the return G_i = R_{i+1} + gamma*R_{i+2} + gamma^2*R_{i+3} + ...
                    for j in range(i, len(episode)):
                        R_j = episode[j][2]
                        sum_rewards_with_discount += R_j * gamma**(j-i)
                    returns_sum[state][action] += sum_rewards_with_discount
                    # count visits per state-action pair (not per state)
                    N[state][action] += 1
list_of_seen_states.append(state)
else:
# IMPLEMENTING EVERY-VISIT MC
for i, (state, action, reward) in enumerate(episode):
sum_rewards_with_discount = 0
                # For every visit, compute the return G_i = R_{i+1} + gamma*R_{i+2} + gamma^2*R_{i+3} + ...
                for j in range(i, len(episode)):
                    R_j = episode[j][2]
                    sum_rewards_with_discount += R_j * gamma**(j-i)
                returns_sum[state][action] += sum_rewards_with_discount
                # count visits per state-action pair (not per state)
                N[state][action] += 1
    # We now calculate the mean return for each state-action pair
    for state in returns_sum.keys():
        # guard against division by zero for actions that were never taken in this state
        Q[state] = np.divide(returns_sum[state], N[state],
                             out=np.zeros_like(returns_sum[state]), where=N[state] > 0)
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 50000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 50000/50000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
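The control implementation below pairs the constant-$\alpha$ update with $\epsilon$-greedy policy improvement. Spelled out, an $\epsilon$-greedy policy built from the current estimates is

$$\pi(a \mid s) = \begin{cases} 1 - \epsilon + \epsilon / |\mathcal{A}| & \text{if } a = \arg\max_{a'} Q(s, a') \\ \epsilon / |\mathcal{A}| & \text{otherwise,} \end{cases}$$

with $\epsilon$ decayed towards a small floor as training progresses (the helper in the next cell uses a linear decay that is clipped at $0.1$).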
###Code
np.argmax([1,3,2])
def epsilon_greedy(Q, current_episode, total_num_episodes, env, start_epsilon = 1):
'''
Implements the epsilon-greedy algorithm, with a decaying epsilon
'''
policy = {}
    # epsilon decays linearly towards 0 as training progresses, but is clipped so it never drops below 0.1
    epsilon = start_epsilon - (current_episode/total_num_episodes)*start_epsilon
    epsilon = 0.1 if epsilon < 0.1 else epsilon
    for state in Q.keys():
        # Exploration: draw a fresh random number for each state
        if (np.random.random() < epsilon):
policy[state] = env.action_space.sample()
# Exploitation
else:
policy[state] = np.argmax(Q[state])
return policy
def generate_episode_from_policy(env, policy=None):
'''
    Generates an episode following the input policy (a dictionary where each key is a possible state).
If no policy is provided, we perform a random choice.
'''
episode = []
state = env.reset()
while True:
if (policy is None):
action = env.action_space.sample()
else:
try:
action = policy[state]
            except KeyError:
# If state is not defined in the policy
action = env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# initialize policy placeholder variable
policy = None
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode_from_policy(env, policy)
# IMPLEMENTING FIRST-VISIT MC
list_of_seen_states = []
for i, (state, action, reward) in enumerate(episode):
if (state not in list_of_seen_states):
sum_rewards_with_discount = 0
                # For each first-visited state, compute the return G_i = R_{i+1} + gamma*R_{i+2} + gamma^2*R_{i+3} + ...
for j in range(i, len(episode)):
R_j = episode[j][2]
sum_rewards_with_discount += R_j * gamma**(j-i)
list_of_seen_states.append(state)
Q[state][action] = Q[state][action] + alpha * (sum_rewards_with_discount - Q[state][action])
# updating policy according to new Q-table after every episode passes
policy = epsilon_greedy(Q, i_episode, num_episodes, env)
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.2)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(17, 4, False)
End game! Reward: -1.0
You lost :(
(21, 3, True)
(15, 3, False)
(20, 3, False)
End game! Reward: 1.0
You won :)
(10, 2, False)
(20, 2, False)
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((19, 10, False), 0, 1.0)]
[((15, 4, False), 1, 0), ((21, 4, False), 1, -1)]
[((20, 10, False), 0, 0.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
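A small identity that the implementation below leans on: when an episode is processed backwards, the return at each step can be accumulated incrementally,

$$G_t = R_{t+1} + \gamma\, G_{t+1}, \qquad G_T = 0,$$

which is exactly what the running `cumu_reward` variable in the next cell computes.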
###Code
print(env.action_space.n)
print(np.zeros(2))
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episode = generate_episode(env)
cumu_reward = 0
for state, action, reward in reversed(episode):
cumu_reward = reward + (gamma * cumu_reward)
returns_sum[state][action] += cumu_reward
N[state][action] += 1.0
    # compute the average return per state-action pair once all episodes are collected
    # (guarding against division by zero for actions never taken in a state)
    Q = {k: np.divide(returns_sum[k], N[k], out=np.zeros_like(returns_sum[k]), where=N[k] > 0)
         for k in returns_sum}
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
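The variant below handles exploration with a multiplicatively decayed $\epsilon$,

$$\epsilon_k = \max(\lambda\, \epsilon_{k-1},\ \epsilon_{\min}),$$

where $\lambda$ corresponds to `epsilon_decay` and $\epsilon_{\min}$ to `min_eps` in the function signature; with the defaults used here ($\lambda = 0.9999$, $\epsilon_{\min} = 0.05$), exploration fades over roughly the first 30,000 episodes before settling at the floor.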
###Code
def select_action(nA, Q_s, epsilon):
prob = epsilon * np.ones(nA, dtype=np.float32) / nA
prob[np.argmax(Q_s)] += 1 - epsilon
return np.random.choice(np.arange(nA), p=prob)
def generate_epsilon_episode(env, Q, epsilon):
state = env.reset()
episode = []
nA = env.action_space.n
while True:
action = select_action(nA, Q[state], epsilon)
next_state, reward, done, info = env.step(action)
episode.append([state, action, reward])
state = next_state
if done:
break
return episode
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, epsilon_decay=0.9999, min_eps=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
policy = defaultdict(int)
# loop over episodes
epsilon = eps_start
for i_episode in range(1, num_episodes+1):
epsilon = max(epsilon_decay*epsilon, min_eps)
episode = generate_epsilon_episode(env, Q, epsilon)
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
cumulative_reward = 0
states, actions, rewards = zip(*list(reversed(episode)))
for idx, state in enumerate(states):
action = actions[idx]
reward = rewards[idx]
            # accumulate the discounted return while walking the episode backwards
            cumulative_reward = reward + gamma * cumulative_reward
Q[state][action] = (1 - alpha)*Q[state][action] + alpha*cumulative_reward
## TODO: complete the function
updated_policy = dict((k, np.argmax(v)) for k, v in Q.items())
policy = defaultdict(int, updated_policy)
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
np.random.seed(0)
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print('curr state =', state)
action = env.action_space.sample()
print('curr action =', action)
state, reward, done, info = env.step(action)
print('new state =', state)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
curr state = (11, 5, False)
curr action = 0
new state = (11, 5, False)
End game! Reward: -1.0
You lost :(
curr state = (12, 4, False)
curr action = 0
new state = (12, 4, False)
End game! Reward: -1.0
You lost :(
curr state = (15, 6, False)
curr action = 1
new state = (25, 6, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
#print('curr_state = ', state)
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
#print('action, prob = ', action, probs)
next_state, reward, done, info = bj_env.step(action)
#print('NS, reward, done = ', next_state, reward, done)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
#print(i)
print(generate_episode_from_limit_stochastic(env))
###Output
[((13, 3, True), 1, 0.0), ((15, 3, True), 1, 0.0), ((15, 3, False), 1, 0.0), ((17, 3, False), 1, -1.0)]
[((19, 1, False), 0, -1.0)]
[((7, 10, False), 1, 0.0), ((16, 10, False), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q_first(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
        episode = generate_episode(env)
        # first-visit MC: update each (state, action) pair only at its first occurrence in the episode
        first_visits = set()
        for i, (state, action, reward) in enumerate(episode):
            if (state, action) in first_visits:
                continue
            first_visits.add((state, action))
            # discounted return from step i onwards
            G = sum(episode[j][2] * gamma**(j - i) for j in range(i, len(episode)))
            N[state][action] += 1
            # incremental mean update of the action-value estimate
            Q[state][action] += (G - Q[state][action]) / N[state][action]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q_first(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
def mc_prediction_q_every(env, num_episodes, generate_episode, gamma=0.9):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episode = generate_episode(env)
## TODO: complete the function
        # every-visit MC: every occurrence of a (state, action) pair contributes
        # the discounted return from that step onwards
        M = len(episode)
        for i in range(M):
            state, action, reward = episode[i]
            G = sum(episode[j][2] * gamma**(j - i) for j in range(i, M))
            returns_sum[state][action] += G
            N[state][action] += 1
    # normalise once, after all episodes have been collected
    num = 0
    for state in returns_sum.keys():
        num += N[state][0] + N[state][1]
        if N[state][0] > 0:
            Q[state][0] = returns_sum[state][0] / N[state][0]
        if N[state][1] > 0:
            Q[state][1] = returns_sum[state][1] / N[state][1]
    # total number of state-action visits observed across all episodes
    print(num)
return Q
# obtain the action-value function
Q = mc_prediction_q_every(env, 500000, generate_episode_from_limit_stochastic)
#print(Q)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
0.81 0.0 0.0
0.9 1.0 0.9
0.81 0.0 0.0
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.7290000000000001 0.0 0.0
0.81 0.0 0.0
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.81 0.0 0.0
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.9 1.0 0.9
0.81 0.0 0.0
0.9 1.0 0.9
0.9 1.0 0.9
0.81 0.0 0.0
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.7290000000000001 0.0 0.0
0.81 0.0 0.0
0.9 -1.0 -0.9
0.81 0.0 0.0
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.81 0.0 0.0
0.9 -1.0 -0.9
0.9 1.0 0.9
0.7290000000000001 0.0 0.0
0.81 0.0 0.0
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.6561 0.0 0.0
0.7290000000000001 0.0 0.0
0.81 0.0 0.0
0.9 1.0 0.9
0.7290000000000001 0.0 0.0
0.81 0.0 0.0
0.9 1.0 0.9
0.81 0.0 0.0
0.9 1.0 0.9
0.81 0.0 0.0
0.9 1.0 0.9
0.9 1.0 0.9
0.9 -1.0 -0.9
0.7290000000000001 0.0 0.0
0.81 0.0 0.0
0.9 -1.0 -0.9
0.9 1.0 0.9
0.9 1.0 0.9
0.5904900000000001 0.0 0.0
0.6561 0.0 0.0
0.7290000000000001 0.0 0.0
0.81 0.0 0.0
0.9 -1.0 -0.9
0.7290000000000001 0.0 0.0
0.81 0.0 0.0
0.9 1.0 0.9
0.81 0.0 0.0
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.6561 0.0 0.0
0.7290000000000001 0.0 0.0
0.81 0.0 0.0
0.9 1.0 0.9
0.81 0.0 0.0
0.9 -1.0 -0.9
0.81 0.0 0.0
0.9 -1.0 -0.9
0.81 0.0 0.0
0.9 1.0 0.9
0.9 1.0 0.9
0.81 0.0 0.0
0.9 -1.0 -0.9
0.81 0.0 0.0
0.9 -1.0 -0.9
0.7290000000000001 0.0 0.0
0.81 0.0 0.0
0.9 1.0 0.9
0.7290000000000001 0.0 0.0
0.81 0.0 0.0
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.9 1.0 0.9
0.81 0.0 0.0
0.9 -1.0 -0.9
0.9 1.0 0.9
0.81 0.0 0.0
0.9 -1.0 -0.9
0.81 0.0 0.0
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.9 1.0 0.9
0.9 -1.0 -0.9
0.7290000000000001 0.0 0.0
0.81 0.0 0.0
0.9 -1.0 -0.9
0.81 0.0 0.0
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.7290000000000001 0.0 0.0
0.81 0.0 0.0
0.9 -1.0 -0.9
0.7290000000000001 0.0 0.0
0.81 0.0 0.0
0.9 1.0 0.9
0.9 1.0 0.9
0.9 1.0 0.9
0.81 0.0 0.0
0.9 0.0 0.0
0.9 1.0 0.9
0.81 0.0 0.0
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.81 0.0 0.0
0.9 1.0 0.9
0.81 0.0 0.0
0.9 1.0 0.9
0.9 1.0 0.9
0.9 -1.0 -0.9
0.81 0.0 0.0
0.9 -1.0 -0.9
0.7290000000000001 0.0 0.0
0.81 0.0 0.0
0.9 1.0 0.9
0.81 0.0 0.0
0.9 1.0 0.9
0.7290000000000001 0.0 0.0
0.81 0.0 0.0
0.9 0.0 0.0
0.81 0.0 0.0
0.9 0.0 0.0
0.9 -1.0 -0.9
0.81 0.0 0.0
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.9 1.0 0.9
0.9 -1.0 -0.9
0.81 0.0 0.0
0.9 1.0 0.9
0.9 0.0 0.0
0.9 1.0 0.9
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.81 0.0 0.0
0.9 1.0 0.9
0.81 0.0 0.0
0.9 1.0 0.9
0.9 -1.0 -0.9
0.9 -1.0 -0.9
0.81 0.0 0.0
0.9 0.0 0.0
(verbose numeric output truncated: repeated rows of three values per line — a power of 0.9, a reward of -1.0/0.0/1.0, and their product)
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
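As a reminder of the update this section asks for (a sketch of the rule in the notation above, not an additional requirement), constant-$\alpha$ MC control replaces the running average with a fixed step size: $Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big( G_t - Q(S_t, A_t) \big)$, where $G_t$ is the discounted return that followed the visit to $(S_t, A_t)$.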
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
    # dictionaries of visit counts and action-value estimates
    N = defaultdict(lambda: np.zeros(env.action_space.n))
    Q = defaultdict(lambda: np.zeros(env.action_space.n))
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 1000 == 0:
            print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        # this version follows the fixed stochastic policy from Part 1
        episode = generate_episode_from_limit_stochastic(env)
        # walk backwards through the episode, accumulating the discounted return,
        # and apply the constant-alpha update for every visited state-action pair
        G = 0
        for state, action, reward in reversed(episode):
            G = reward + gamma*G
            Q[state][action] += alpha*(G - Q[state][action])
            N[state][action] += 1
    policy = dict((k,np.argmax(v)) for k, v in Q.items())
    return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
_____no_output_____
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
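Written out in symbols (just restating the policy described above), the sampling policy is $\pi(\texttt{STICK} \mid s) = 0.8,\ \pi(\texttt{HIT} \mid s) = 0.2$ when the player's sum exceeds 18, and $\pi(\texttt{STICK} \mid s) = 0.2,\ \pi(\texttt{HIT} \mid s) = 0.8$ otherwise.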
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
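For reference (a sketch of the standard MC target, not an additional requirement), each visited pair $(S_t, A_t)$ is credited with the return $G_t = \sum_{k=0}^{T-t-1} \gamma^{k} R_{t+k+1}$, and the estimate is the average of those returns over all visits: $Q(s,a) = \frac{\text{returns\_sum}(s,a)}{N(s,a)}$.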
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
    # initialize empty dictionaries of arrays
    returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
    N = defaultdict(lambda: np.zeros(env.action_space.n))
    Q = defaultdict(lambda: np.zeros(env.action_space.n))
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 1000 == 0:
            print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        # generate an episode and unpack the states, actions, and rewards
        episode = generate_episode(env)
        states, actions, rewards = zip(*episode)
        # prepare the discount factors gamma^0, gamma^1, ...
        discounts = np.array([gamma**i for i in range(len(rewards)+1)])
        # every-visit update of the return sums, visit counts, and Q estimates
        for i, state in enumerate(states):
            returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
            N[state][actions[i]] += 1.0
            Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
    return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
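The state values plotted below are obtained from $Q$ under the same stochastic policy used to generate the episodes: $V(s) = 0.8\,Q(s, \texttt{STICK}) + 0.2\,Q(s, \texttt{HIT})$ when the player's sum exceeds 18, and $V(s) = 0.2\,Q(s, \texttt{STICK}) + 0.8\,Q(s, \texttt{HIT})$ otherwise; this is exactly what the `V_to_plot` expression in the next cell computes.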
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA))
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(actions, epsilon, nA):
all_actions = np.ones(nA)*epsilon/nA
greedy_action = np.argmax(actions)
all_actions[greedy_action] += (1-epsilon)
return all_actions
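# A quick sanity check with hypothetical values, just to illustrate the epsilon-greedy split:
# for Q[state] = [0.2, 0.5], epsilon = 0.1 and nA = 2, get_probs returns [0.05, 0.95] --
# the greedy action gets 1 - epsilon + epsilon/nA and every other action gets epsilon/nA.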
def update_Q(env, episode, Q, alpha, gamma, nA):
    # first-visit constant-alpha update of the action-value estimates
    already_saw = []
    returns_sum = defaultdict(lambda: np.zeros(nA))
    states, actions, rewards = zip(*episode)
    discounts = np.array([gamma**i for i in range(len(rewards)+1)])
    for i, state in enumerate(states):
        if (state, actions[i]) not in already_saw:
            returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
            Q[state][actions[i]] = Q[state][actions[i]] + alpha*(returns_sum[state][actions[i]]-Q[state][actions[i]])
            already_saw.append((state, actions[i]))
    return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma, nA)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(17, 2, False)
End game! Reward: -1
You lost :(
(20, 5, False)
End game! Reward: -1
You lost :(
(20, 7, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((17, 2, True), 0, 1.0)]
[((17, 5, False), 1, 0), ((18, 5, False), 1, 0), ((20, 5, False), 1, -1)]
[((10, 9, False), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
episode = generate_episode(env)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(episode))])
for i, (state, action, reward) in enumerate(episode):
returns_sum[state][action] += sum(rewards[i:]*discounts[:len(rewards)-i])
N[state][action] += 1
Q[state][action] = returns_sum[state][action]/N[state][action]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def epsilon_soft_policy(Qs, epsilon, nA):
policy = np.ones(nA)*epsilon/nA
Q_arg_max = np.argmax(Qs)
policy[Q_arg_max] = 1 - epsilon + epsilon/nA
return policy
def generate_episode_with_Q(env, Q, epsilon, nA):
episode = []
state = env.reset()
while True:
probs = epsilon_soft_policy(Q[state], epsilon, nA)
action = np.random.choice(np.arange(nA), p=probs) if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def update_Q(Q, episode, alpha, gamma):
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(episode))])
for i, (state, action, reward) in enumerate(episode):
Q_prev = Q[state][action]
Q[state][action] = Q_prev + alpha*(sum(rewards[i:]*discounts[:len(rewards)-i]) - Q_prev)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, epsilon_start=1.0, epsilon_decay=0.99999, epsilon_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = epsilon_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
epsilon = max(epsilon*epsilon_decay, epsilon_min)
episode = generate_episode_with_Q(env, Q, epsilon, nA)
Q = update_Q(Q, episode, alpha, gamma)
policy = dict((key, np.argmax(value)) for key, value in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 1/50)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(19, 2, False)
End game! Reward: 0.0
You lost :(
(20, 9, False)
End game! Reward: -1
You lost :(
(10, 7, False)
(12, 7, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(30):
print(generate_episode_from_limit_stochastic(env))
###Output
[((9, 5, False), 1, 0), ((11, 5, False), 0, 1.0)]
[((12, 6, False), 1, 0), ((21, 6, False), 0, 0.0)]
[((20, 1, False), 0, -1.0)]
[((16, 10, False), 1, -1)]
[((14, 10, True), 1, 0), ((12, 10, False), 1, 0), ((19, 10, False), 0, -1.0)]
[((13, 6, False), 1, 0), ((21, 6, False), 1, -1)]
[((7, 6, False), 1, 0), ((18, 6, True), 0, 1.0)]
[((16, 9, False), 0, 1.0)]
[((5, 8, False), 1, 0), ((14, 8, False), 1, -1)]
[((13, 4, False), 1, -1)]
[((20, 10, False), 1, -1)]
[((9, 10, False), 1, 0), ((18, 10, False), 0, -1.0)]
[((13, 4, False), 1, -1)]
[((14, 10, False), 1, -1)]
[((16, 9, False), 1, 0), ((17, 9, False), 1, -1)]
[((12, 10, False), 1, -1)]
[((18, 10, False), 1, -1)]
[((21, 10, True), 1, 0), ((18, 10, False), 1, 0), ((19, 10, False), 0, 1.0)]
[((9, 9, False), 1, 0), ((17, 9, False), 1, 0), ((19, 9, False), 0, 1.0)]
[((12, 1, False), 1, 0), ((16, 1, False), 1, -1)]
[((13, 7, False), 1, 0), ((20, 7, False), 0, 0.0)]
[((17, 8, False), 1, -1)]
[((16, 10, True), 1, 0), ((17, 10, True), 1, 0), ((16, 10, False), 1, 0), ((19, 10, False), 1, -1)]
[((14, 5, False), 1, -1)]
[((15, 3, False), 1, 0), ((21, 3, False), 0, 1.0)]
[((8, 10, False), 1, 0), ((10, 10, False), 1, 0), ((20, 10, False), 0, 0.0)]
[((17, 10, False), 1, -1)]
[((7, 10, False), 0, -1.0)]
[((15, 5, True), 0, 1.0)]
[((14, 8, False), 1, 0), ((17, 8, False), 0, 1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
total = sum([reward*(gamma**i) for i, (state, action, reward) in enumerate(episode)])
prev_reward = 0
_N = defaultdict(lambda: np.zeros(env.action_space.n))
for i, (state, action, reward) in enumerate(episode):
total = total - prev_reward
if not _N[state][action]: #first visit per episode
_N[state][action] = 1
N[state][action] += 1
returns_sum[state][action] += total
total = total/gamma
prev_reward = reward
for state in returns_sum:
Q[state] = returns_sum[state]/N[state]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(env, Q, eps, nA):
episode = []
state = env.reset()
while True:
probs = get_prob(Q[state], eps, nA)
action = np.random.choice(np.arange(nA), p=probs) if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_prob(Q_s, eps, nA):
    """
    Q_s: array of action-value estimates for the current state
    eps: float, exploration rate
    rtype: array of epsilon-greedy action probabilities
    """
    max_id = np.argmax(Q_s)
    prob = np.ones(nA)/nA*eps
    prob[max_id] += (1-eps)
    return prob
def update_Q(Q):
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps=1.0, decay_ratio=0.99999, min_eps=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
epsilon = eps
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
epsilon = max(epsilon*decay_ratio, min_eps)
episode = generate_episode_from_Q(env, Q, epsilon, nA)
        # walk backwards through the episode accumulating the discounted return;
        # overwriting the dict entry keeps the first-visit (earliest) return per pair
        G = 0
        first_visit_returns = {}
        for state, action, reward in reversed(episode):
            G = reward + gamma*G
            first_visit_returns[(state, action)] = G
        for (state, action), G in first_visit_returns.items():
            Q[state][action] = (1-alpha)*Q[state][action] + alpha*G
policy = {state:Q[state].argmax() for state in Q}
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.05, 1.0, 1.0, 0.8)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(10):
state = env.reset()
print(state)
while True:
action = env.action_space.sample()
print(action)
state, reward, done, info = env.step(action)
print(state)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(14, 4, False)
0
(14, 4, False)
End game! Reward: -1.0
You lost :(
(15, 5, False)
1
(23, 5, False)
End game! Reward: -1
You lost :(
(19, 6, False)
0
(19, 6, False)
End game! Reward: 1.0
You won :)
(16, 3, False)
0
(16, 3, False)
End game! Reward: -1.0
You lost :(
(16, 2, False)
0
(16, 2, False)
End game! Reward: 1.0
You won :)
(9, 9, False)
0
(9, 9, False)
End game! Reward: -1.0
You lost :(
(16, 2, True)
1
(18, 2, True)
1
(14, 2, False)
1
(15, 2, False)
1
(25, 2, False)
End game! Reward: -1
You lost :(
(13, 10, False)
0
(13, 10, False)
End game! Reward: 1.0
You won :)
(14, 2, False)
1
(21, 2, False)
0
(21, 2, False)
End game! Reward: 1.0
You won :)
(21, 9, True)
0
(21, 9, True)
End game! Reward: 0.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(20):
print(generate_episode_from_limit_stochastic(env))
###Output
[((18, 10, False), 1, -1)]
[((8, 8, False), 1, 0), ((18, 8, False), 1, -1)]
[((13, 8, False), 1, 0), ((20, 8, False), 0, 1.0)]
[((11, 7, False), 0, 1.0)]
[((18, 5, False), 0, 1.0)]
[((20, 6, False), 0, 1.0)]
[((9, 3, False), 1, 0), ((13, 3, False), 1, 0), ((17, 3, False), 1, -1)]
[((13, 10, False), 1, 0), ((14, 10, False), 0, 1.0)]
[((10, 1, False), 1, 0), ((20, 1, False), 1, -1)]
[((17, 10, False), 1, -1)]
[((6, 7, False), 1, 0), ((16, 7, False), 1, 0), ((20, 7, False), 0, 1.0)]
[((16, 6, True), 1, 0), ((16, 6, False), 1, -1)]
[((18, 1, True), 1, 0), ((20, 1, True), 0, -1.0)]
[((20, 6, False), 0, 0.0)]
[((16, 10, False), 1, 0), ((19, 10, False), 1, -1)]
[((13, 10, False), 1, -1)]
[((19, 1, True), 0, 1.0)]
[((10, 10, False), 0, -1.0)]
[((12, 7, False), 0, -1.0)]
[((19, 10, False), 0, 0.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
    N = defaultdict(lambda: np.zeros(env.action_space.n, dtype=float))
    Q = defaultdict(lambda: np.zeros(env.action_space.n, dtype=float))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
new_episode = generate_episode(env)
_, _, rewards = zip(*new_episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, (S, A, R) in enumerate(new_episode):
Q[S][A] += sum(rewards[i:]*discounts[:-(i+1)])
N[S][A] += 1
Q = {k: np.divide(arr, N[k], out=np.zeros_like(arr), where=N[k]!=0) for k, arr in Q.items()}
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2], v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def generate_episode_from_Q(bj_env, Q, eps):
nA = bj_env.action_space.n
episode = []
state = bj_env.reset()
while True:
action = np.random.choice(range(nA), p=get_probs(range(nA), eps, Q[state]))
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(arr, eps, Q_cur):
if np.random.uniform(0, 1, 1)[0] < eps:
return [1./len(arr) for _ in arr]
else:
best_actions = [1 if Q_cur[i] == max(Q_cur) else 0 for i in range(2)]
return [el/sum(best_actions) for el in best_actions]
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
eps = eps_start
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
new_episode = generate_episode_from_Q(env, Q, eps)
_, _, rewards = zip(*new_episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, (S, A, R) in enumerate(new_episode):
G = sum(rewards[i:]*discounts[:-(1+i)])
Q[S][A] = (1 - alpha)*Q[S][A] + alpha*G
        eps = max(eps * eps_decay, eps_min)
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.01)
###Output
Episode 423000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(16, 2, False)
End game! Reward: -1.0
You lost :(
(5, 8, False)
End game! Reward: -1.0
You lost :(
(21, 6, True)
(14, 6, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((16, 10, False), 1, -1.0)]
[((20, 9, False), 1, -1.0)]
[((13, 3, False), 1, 0.0), ((18, 3, False), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
        # In Blackjack no state-action pair repeats within an episode, so first-visit and every-visit MC coincide;
        # start computing the return from the last state-action pair and work backwards.
episode = generate_episode(env)
ep_cur_return = 0
for s_a_r_pair in episode[::-1]:
# No need to check for first visit (anyway first)
ep_cur_return = s_a_r_pair[2] + gamma*ep_cur_return
Q[s_a_r_pair[0]][s_a_r_pair[1]] = (Q[s_a_r_pair[0]][s_a_r_pair[1]]*N[s_a_r_pair[0]][s_a_r_pair[1]] +
ep_cur_return)
N[s_a_r_pair[0]][s_a_r_pair[1]] += 1
Q[s_a_r_pair[0]][s_a_r_pair[1]] /= N[s_a_r_pair[0]][s_a_r_pair[1]]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
def mc_prediction_q_udacity(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# generate an episode
episode = generate_episode(env)
# obtain the states, actions, and rewards
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
            Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
    return Q
def mc_prediction_q_merged(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
N_my = defaultdict(lambda: np.zeros(env.action_space.n))
Q_my = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
        # In Blackjack no state-action pair repeats within an episode, so first-visit and every-visit MC coincide;
        # start computing the return from the last state-action pair and work backwards.
episode = generate_episode(env)
ep_cur_return = 0
for s_a_r_pair in episode[::-1]:
            # no first-visit check needed: every visit is automatically a first visit
ep_cur_return = s_a_r_pair[2] + gamma*ep_cur_return
Q_my[s_a_r_pair[0]][s_a_r_pair[1]] = (Q_my[s_a_r_pair[0]][s_a_r_pair[1]]*N_my[s_a_r_pair[0]][s_a_r_pair[1]] +
ep_cur_return)
N_my[s_a_r_pair[0]][s_a_r_pair[1]] += 1
Q_my[s_a_r_pair[0]][s_a_r_pair[1]] /= N_my[s_a_r_pair[0]][s_a_r_pair[1]]
# obtain the states, actions, and rewards
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q, Q_my
###Output
_____no_output_____
###Markdown
Q_merge, Q_my_merge = mc_prediction_q_merged(env, 50000, generate_episode_from_limit_stochastic, gamma=0.7)
for k, v in Q.items():
    if not np.allclose(Q_merge[k], Q_my_merge[k]):
        print(Q_merge[k], Q_my_merge[k])
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic, gamma=1)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# V[state] = E_{a ~ pi(a|s)}[ Q[state, a] ]:
#   if state_sum > 18:  V[state] = 0.8*Q[s][0] (stick) + 0.2*Q[s][1] (hit)
#   if state_sum <= 18: V[state] = 0.2*Q[s][0] (stick) + 0.8*Q[s][1] (hit)
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
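For reference, the helper functions below implement the standard pieces of this recipe: the $\epsilon$-greedy policy assigns the greedy action probability $1 - \epsilon + \frac{\epsilon}{nA}$ and every other action probability $\frac{\epsilon}{nA}$, and the constant-$\alpha$ update is $Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\big(G_t - Q(S_t, A_t)\big)$.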
###Code
def generate_episode_from_Q(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control_udacity(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_decay = None):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
policy = defaultdict(lambda: env.action_space.sample())
eps = 1
    if eps_decay is None:
        # linearly decay epsilon from 1.0 towards 0.1 over num_episodes
        eps_decay = lambda eps: eps - (1-0.1)/num_episodes
def policy_improve_epsilon_greedy(policy, Q, eps = 0.1, num_actions = nA):
for state, action_values in Q.items():
probs = [eps/nA]*nA
probs[np.argmax(action_values)] = 1 - eps + eps/nA
            # sample a discrete action from {0, ..., nA-1}
policy[state] = np.random.choice(np.arange(nA), p=probs)
return policy
def generate_episode_from_policy(env, policy):
episode = []
state = env.reset()
while True:
action = policy[state]
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_state, eps, nA):
probs = np.ones(nA)*eps/nA
probs[np.argmax(Q_state)] = 1 - eps + eps/nA
return probs
def generate_episode_from_Q(env, Q, eps, nA):
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], eps, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
### policy improvement
eps = eps_decay(eps)
#policy = policy_improve_epsilon_greedy(policy, Q, eps, nA)
episode = generate_episode_from_Q(env, Q, eps, nA)
### policy evaluation
G = 0
# function estimates for each state-action pair in the episode
for state, action, reward in episode[::-1]:
G = reward + gamma*G
Q[state][action] += alpha*(G - Q[state][action])
policy = dict((k, np.argmax(Q[k])) for k in Q)
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
%load_ext line_profiler
###Output
The line_profiler extension is already loaded. To reload it, use:
%reload_ext line_profiler
###Markdown
%lprun -f mc_control mc_control(env, 1000, 0.02)
%lprun -f mc_control_udacity mc_control_udacity(env, 1000, 0.02)
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
# obtain the estimated optimal policy and action-value function
policy_05, Q_05 = mc_control(env, 500000, 0.5)
# obtain the estimated optimal policy and action-value function
policy_udacity, Q_udacity = mc_control_udacity(env, 500000, 0.02)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
for i, k in enumerate(policy_udacity):
    if policy[k] != policy_udacity[k]:
        print(i)
        print(f"Different actions after convergence: state {k} - {policy[k]} and {policy_udacity[k]}")
###Code
# plot the policy
plot_policy(policy_udacity)
# plot the policy
plot_policy(policy_05)
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
%cd deep-reinforcement-learning/monte-carlo
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
/content/deep-reinforcement-learning/monte-carlo
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(13, 10, False)
End game! Reward: -1.0
You lost :(
(10, 10, False)
(17, 10, False)
End game! Reward: -1
You lost :(
(12, 7, False)
End game! Reward: -1.0
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
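Written out as a policy, this is $\pi(\mathrm{STICK} \mid s) = 0.8,\ \pi(\mathrm{HIT} \mid s) = 0.2$ whenever the player's sum exceeds 18, and $\pi(\mathrm{STICK} \mid s) = 0.2,\ \pi(\mathrm{HIT} \mid s) = 0.8$ otherwise; the `probs` list in the function below encodes exactly these two cases (action `0` is STICK, action `1` is HIT).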
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((16, 10, False), 1, 0), ((20, 10, False), 0, 0.0)]
[((18, 2, False), 0, 1.0)]
[((20, 10, False), 0, 0.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## Done: complete the function
episode = generate_episode(env)
G = 0
for s, a, r in reversed(episode):
G = r + gamma * G
returns_sum[s][a] += G
N[s][a] += 1
Q[s][a] = returns_sum[s][a] / N[s][a]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
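Rather than the $\epsilon$-greedy scheme described above, the implementation below explores with a UCB-style bonus: any untried action in a state is taken first, and otherwise the behaviour policy picks $\arg\max_a \big[Q(s,a) + \sqrt{\tfrac{\ln t}{N(s,a)}}\big]$, where $t$ is the total number of time steps seen so far. (The classical UCB1 rule usually carries an exploration coefficient $c$ in front of the square root; here it is effectively $1$.)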
###Code
def generate_episode_from_policy(bj_env, policy, N):
episode = []
state = bj_env.reset()
while True:
action = policy[state]
N[state][action] += 1
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
import random
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
N = defaultdict(lambda: np.zeros(nA))
policy = defaultdict(lambda: random.choice(range(nA)))
# loop over episodes
    t = 0  # total number of time steps seen so far (used in the UCB bonus)
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 10000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## Done: complete the function
# generate episode
episode = generate_episode_from_policy(env, policy, N)
# evaluate policy
G = 0
for s, a, r in reversed(episode):
G = r + gamma * G
Q[s][a] = Q[s][a] + alpha * (G - Q[s][a])
# improve policy (UCB)
t += len(episode)
for s in Q.keys():
if (N[s] == 0).any():
policy[s] = np.argmin(N[s])
else:
policy[s] = np.argmax(Q[s] + np.sqrt(np.log(t) / N[s]))
    for s in Q:
policy[s] = np.argmax(Q[s])
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.01)
###Output
Episode 500000/500000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
_____no_output_____
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
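Purely as a reference for filling in the `TODO` below, one possible every-visit version is sketched here; it is not the only valid solution, and the name `mc_prediction_q_sketch` is just illustrative.

```
def mc_prediction_q_sketch(env, num_episodes, generate_episode, gamma=1.0):
    from collections import defaultdict
    import numpy as np
    # running sum of returns, visit counts, and action-value estimates
    returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
    N = defaultdict(lambda: np.zeros(env.action_space.n))
    Q = defaultdict(lambda: np.zeros(env.action_space.n))
    for _ in range(num_episodes):
        episode = generate_episode(env)
        G = 0
        # walk the episode backwards, accumulating the discounted return
        for state, action, reward in reversed(episode):
            G = reward + gamma * G
            returns_sum[state][action] += G
            N[state][action] += 1
            Q[state][action] = returns_sum[state][action] / N[state][action]
    return Q
```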
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
_____no_output_____
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(15, 4, False)
End game! Reward: 1.0
You won :)
(11, 10, False)
End game! Reward: 1.0
You won :)
(17, 3, False)
End game! Reward: -1
You lost :(
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((12, 2, False), 0, -1.0)]
[((13, 10, False), 1, 0), ((19, 10, False), 0, -1.0)]
[((16, 7, False), 1, -1)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
Episode 500000/500000.
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
_____no_output_____
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
_____no_output_____
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
_____no_output_____
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
###Code
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, ?, ?)
###Output
_____no_output_____
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnvWe begin by importing the necessary packages.
###Code
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
###Output
_____no_output_____
###Markdown
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
###Code
env = gym.make('Blackjack-v1')
###Output
_____no_output_____
###Markdown
Each state is a 3-tuple of:- the player's current sum $\in \{0, 1, \ldots, 31\}$,- the dealer's face up card $\in \{1, \ldots, 10\}$, and- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).The agent has two potential actions:``` STICK = 0 HIT = 1```Verify this by running the code cell below.
###Code
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
###Code
for i_episode in range(3):
state = env.reset()
while True:
prev_state = state
action = env.action_space.sample()
state, reward, done, info = env.step(action)
print(f"S={prev_state}, A={action}, R={reward}, S'={state}")
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
S=(15, 10, False), A=0, R=-1.0, S'=(15, 10, False)
End game! Reward: -1.0
You lost :(
S=(11, 8, False), A=0, R=-1.0, S'=(11, 8, False)
End game! Reward: -1.0
You lost :(
S=(9, 6, False), A=0, R=1.0, S'=(9, 6, False)
End game! Reward: 1.0
You won :)
###Markdown
Part 1: MC PredictionIn this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.It returns as **output**:- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
###Code
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
###Code
for i in range(3):
print(generate_episode_from_limit_stochastic(env))
###Output
[((18, 1, True), 1, 0.0), ((13, 1, False), 1, 0.0), ((20, 1, False), 0, 0.0)]
[((12, 10, False), 1, 0.0), ((20, 10, False), 0, 1.0)]
[((14, 1, False), 0, -1.0)]
###Markdown
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.Your algorithm has three arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `generate_episode`: This is a function that returns an episode of interaction.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
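The return obeys the recursion $G_t = R_{t+1} + \gamma G_{t+1}$ (with $G_T = 0$), which is why the implementation below walks the episode in reverse: adding the reward first and multiplying by $\gamma$ at the end of each pass reproduces exactly this recursion for every time step.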
###Code
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0, first_visit=True):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode(env)
episode = list(reversed(episode))
        visited = dict()  # index (in the reversed episode) of the first visit to (state, action)
if first_visit:
for i, (state, action, reward) in enumerate(episode):
visited[(state, action)] = i
        G = 0
        for i, (state, action, reward) in enumerate(episode):
            G += reward  # G is now the return from this step of the (reversed) episode
            if not first_visit or visited[(state, action)] == i:
                returns_sum[state][action] += G
                N[state][action] += 1
            G *= gamma  # discount before adding the next (earlier) reward
for k in returns_sum:
Q[k] = returns_sum[k] / N[k]
return Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
###Code
# obtain the action-value function
Q = mc_prediction_q(env, 50000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
###Output
<ipython-input-34-1d29bf91437b>:30: RuntimeWarning: invalid value encountered in true_divide
Q[k] = returns_sum[k] / N[k]
###Markdown
Part 2: MC ControlIn this section, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.(_Feel free to define additional functions to help you to organize your code._)
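The exploration schedule used in the implementation below decays $\epsilon$ linearly from $1$ to roughly $0.1$ over training, $\epsilon \leftarrow \epsilon - \frac{1 - 0.1}{\text{num\_episodes}}$, so early episodes act almost at random while later episodes are mostly greedy with respect to the current $Q$.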
###Code
import pdb
def generate_episode_with_policy(env, policy, eps):
episode = []
state = env.reset()
while True:
if np.random.rand() > eps and state in policy:
action = policy[state] # greedy
else:
action = env.action_space.sample()
new_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
if done:
break
state = new_state
return episode
def mc_control(env, num_episodes, alpha, gamma=1.0):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
# loop over episodes
    eps = 1
    eps_delta = (1 - .1) / num_episodes  # linear decay of epsilon from 1.0 to 0.1
policy = {}
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
## TODO: complete the function
episode = generate_episode_with_policy(env, policy, eps)
states, actions, rewards = zip(*episode)
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
rewards = np.array(rewards)
for i, (state, action) in enumerate(zip(states, actions)):
G = sum(rewards[i:] * discounts[:-(i+1)])
Q[state][action] *= (1 - alpha)
Q[state][action] += alpha * G
        policy = {k: np.argmax(Q[k]) for k in Q}
eps -= eps_delta
return policy, Q
###Output
_____no_output_____
###Markdown
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
###Code
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 50000, .2)
###Output
Episode 50000/50000.
###Markdown
Next, we plot the corresponding state-value function.
###Code
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
###Output
_____no_output_____
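Before visualizing the policy, it can be reassuring to sanity-check it by simply playing it out (a minimal sketch, assuming `env` and `policy` from the cells above; in Blackjack even a good policy has a slightly negative average return):
```python
# Roll out the learned greedy policy and report the average return
n_eval = 10000
total_return = 0.0
for _ in range(n_eval):
    state = env.reset()
    while True:
        action = policy.get(state, env.action_space.sample())  # random fallback for unseen states
        state, reward, done, info = env.step(action)
        if done:
            total_return += reward
            break
print("average return of greedy policy:", total_return / n_eval)
```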
###Markdown
Finally, we visualize the policy that is estimated to be optimal.
###Code
# plot the policy
plot_policy(policy)
###Output
_____no_output_____ |
K-Nearest Neighbors/K-Nearest-Neighbor_Practice_06.12.2020.ipynb | ###Markdown
Preliminaries
###Code
import numpy as np
import pandas as pd
import statsmodels.api as sm
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import scale, StandardScaler
from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score
from sklearn.metrics import confusion_matrix, accuracy_score, mean_squared_error, r2_score, roc_auc_score, roc_curve, classification_report
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Load Dataset
###Code
df=pd.read_pickle("saved_df.pkl")
###Output
_____no_output_____
###Markdown
Explore Data
###Code
df.head()
df.shape
df.corr()["Outcome"].sort_values().plot.barh()
##Create the features matrix and Create the target vector
X=df.drop(["Outcome"], axis=1)
y=df["Outcome"]
##Split Into Training And Test Sets
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3,random_state=0)
###Output
_____no_output_____
###Markdown
**Logistic Regression**
###Code
##Create the Logistic Model
log_model=LogisticRegression()
#Fit the model
log_model.fit(X_train,y_train)
#Predict the test set
y_pred=log_model.predict(X_test)
#Evaluate model performance
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
###Output
[[121 16]
[ 34 42]]
precision recall f1-score support
0 0.78 0.88 0.83 137
1 0.72 0.55 0.63 76
accuracy 0.77 213
macro avg 0.75 0.72 0.73 213
weighted avg 0.76 0.77 0.76 213
###Markdown
**K-Nearest Neighbor**
###Code
X_train.head()
X_train.describe()
#Standardize Features
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
a=pd.DataFrame(X_train, columns=X.columns)
a.head()
a.head()
#Create and fit the Model
knn_model=KNeighborsClassifier().fit(X_train, y_train)
#Predict the test set
y_pred=knn_model.predict(X_test)
#Evaluate model performance
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(knn_model,X_test,y_test, values_format=".2f")
###Output
_____no_output_____
###Markdown
 **Model Tuning**
###Code
knn = KNeighborsClassifier()
np.arange(1,50)
knn_params = {"n_neighbors": np.arange(1,50)}
knn_cv_model = GridSearchCV(knn, knn_params, cv=10).fit(X_train, y_train)
knn_cv_model.best_params_
knn_tuned = KNeighborsClassifier(n_neighbors = 15).fit(X_train, y_train)
y_pred = knn_tuned.predict(X_test)
confusion_matrix(y_test, y_pred)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
0 0.76 0.88 0.81 137
1 0.70 0.49 0.57 76
accuracy 0.74 213
macro avg 0.73 0.69 0.69 213
weighted avg 0.74 0.74 0.73 213
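As a complementary check on the grid search (a minimal sketch reusing the scaled `X_train`, `y_train`, and the imports loaded above), the mean cross-validated accuracy can be plotted against `n_neighbors` to see why a mid-sized neighborhood is chosen:
```python
# Visualize how cross-validated accuracy changes with k
k_range = np.arange(1, 50)
cv_scores = [cross_val_score(KNeighborsClassifier(n_neighbors=k), X_train, y_train, cv=10).mean()
             for k in k_range]
plt.plot(k_range, cv_scores)
plt.xlabel("n_neighbors (k)")
plt.ylabel("mean 10-fold CV accuracy")
plt.show()
```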
###Markdown
**GridSearch with recall**
###Code
knn_cv_model = GridSearchCV(knn, knn_params, scoring = "recall", cv=10).fit(X_train, y_train)
knn_cv_model.best_params_
knn_tuned_2 = KNeighborsClassifier(n_neighbors = 3).fit(X_train, y_train)
y_pred = knn_tuned_2.predict(X_test)
confusion_matrix(y_test, y_pred)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
0 0.76 0.85 0.80 137
1 0.65 0.51 0.57 76
accuracy 0.73 213
macro avg 0.70 0.68 0.69 213
weighted avg 0.72 0.73 0.72 213
###Markdown
**Model Deployment**
###Code
X=df.drop(["Outcome"], axis=1)
y=df["Outcome"]
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3,random_state=42)
log_model=LogisticRegression()
log_model.fit(X_train,y_train)
y_pred=log_model.predict(X_test)
confusion_matrix(y_test, y_pred)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
0 0.80 0.84 0.82 146
1 0.60 0.54 0.57 67
accuracy 0.74 213
macro avg 0.70 0.69 0.69 213
weighted avg 0.74 0.74 0.74 213
###Markdown
**saving the model**
###Code
import pickle
pickle.dump(log_model, open("my_model", 'wb'))
model = pickle.load(open("my_model", "rb"))
###Output
_____no_output_____
###Markdown
**predictions with the saved model**
###Code
prediction = model.predict(X)
prediction[:5]
df.head()
df["Pred"]=model.predict(X)
df.sample(10)
pred_prob = model.predict_proba(X)
pred_prob[:5][:,1]
df["Prob"]=pred_prob[:,1]
df.sample(10)
###Output
_____no_output_____
###Markdown
**prediction for a single patient**
###Code
X.columns
my_dict={'Pregnancies':10,
'Glucose':180,
'BloodPressure':70,
'SkinThickness':30,
'Insulin':50,
'BMI':38,
'DiabetesPedigreeFunction':0.15,
'Age':50}
df_sample=pd.DataFrame([my_dict])
df_sample
single_pred=model.predict(df_sample)
print(single_pred)
single_pred_prob=model.predict_proba(df_sample)
print(single_pred_prob[:,1])
###Output
[0.79344574]
|
examples/morse_exp6_NVT/morse_exp6_NVT.ipynb | ###Markdown
FunUQ for MD Sam Reeve and Alejandro Strachan Replication of: Reeve, S. T. & Strachan, A. Quantifying uncertainties originating from interatomic potentials in molecular dynamics. (Submitted to Modell. Simul. Mater. Sci. Eng. 2018). NVT Morse / Exponential-6 at 1500K and 1 atom (NOT pre-run simulations) This notebook goes through all steps of functional uncertainty quantification FunUQ for interatomic potential in molecular dynamics, matching one case from the paper. The main steps are: * Define folders, simulation system, and models * (Run simulations) * Calculate functional derivatives * Calculate correction for quantities of interest due to changing from one function to another
###Code
import sys, os, numpy as np
# Relative path from notebook to module
sys.path.insert(0, '../../lib/')
sys.path.insert(0, '../../lib/FunUQ/')
# Import FunUQ module
from FunUQ import *
# Provides access to nanoHUB simulation codes (LAMMPS)
from hublib import use
# Utility functions (Austin Zadoks)
from nH_utils import *
% use lammps-09Dec14
% matplotlib notebook
# "True" will run new simulations below
# Change after first usage to only analyze results
run_main = True
run_verify = True
run_perturb = False
run_bruteforce = False
###Output
_____no_output_____
###Markdown
System setup: define interatomic potentials and quantities of interest
###Code
rundir = os.getcwd()
startdir = os.path.abspath(os.path.join(rundir, 'init/'))
mainname = 'main' # morse
correctname = 'exp6'
Pot_main = Potential('morse', paramdir=startdir, create=True, N=7000, rmax=7.0, cut=6.0)
Pot_correct = Potential('exp6', paramdir=startdir, create=True, N=7000, rmax=7.0, cut=6.0)
ax1 = Pot_main.plot()
ax1 = Pot_correct.plot(ax=ax1, color='red')
QoI_list = ['PotEng', 'Press']
Nqoi = len(QoI_list)
QoI_dict = {'description': 'Replication of Reeve and Strachan, (Submitted 2018)',
'Ncopies': 2,
'units': ['eV/atom', 'GPa'],
#'overwrite': True,
}
QoI = QuantitiesOfInterest(QoI_list, Pot_main,
startdir, rundir, mainname, 'metal',
input_dict=QoI_dict)
QoI_correct = QuantitiesOfInterest(QoI_list, Pot_correct,
startdir, rundir, correctname, 'metal',
input_dict=QoI_dict)
###Output
_____no_output_____
###Markdown
Run simulations or extract results
###Code
if run_main:
QoI.run_lammps(mode='nanoHUB_submit') # 'nanoHUB_local'
if run_verify:
QoI_correct.run_lammps(mode='nanoHUB_submit')
submit_status()
#kill_jobs('') # Use RunName
#kill_all_jobs()
local_status(rundir, [mainname, correctname])
QoI.extract_lammps()
QoI_correct.extract_lammps()
print(QoI); print(QoI_correct)
###Output
_____no_output_____
###Markdown
Calculate functional derivatives
###Code
FD_dict = {'alist': [-1e-8, -2e-8, 1e-8, 2e-8],
}
FuncDer = FuncDer_perturb_coord(QoI, Pot_main,
input_dict=FD_dict)
if run_bruteforce and FuncDer.method == 'bruteforce':
FuncDer.run_lammps()
elif run_perturb and FuncDer.method == 'perturbative_allatom':
FuncDer.rerun_gauss()
FuncDer.prepare_FD()
FuncDer.calc_FD()
for x in range(Nqoi):
FuncDer.write_FD(x)
FuncDer.plot_FD(x)
FuncDer.plot_perturb(0)
###Output
_____no_output_____
###Markdown
Correct quantities of interest
###Code
Correct = FunUQ(Pot_main, Pot_correct, QoI.Q_names, QoI.Qavg, QoI_correct.Qavg,
Q_units=QoI.units, FD=FuncDer.funcder, R=FuncDer.rlist)
###Output
_____no_output_____
###Markdown
Compare this plot to similar case in Reeve & Strachan 2018, Figure 3
###Code
Correct.discrepancy()
Correct.plot_discrep()
Correct.correct()
###Output
_____no_output_____
###Markdown
Compare this plot to similar case in Reeve & Strachan 2018, Figure 3
###Code
for x in range(Nqoi):
Correct.plot_funcerr(x)
Correct.plot_correction(x)
###Output
_____no_output_____ |
01_Introduction.ipynb | ###Markdown
 Welcome to Python> _"This is my editor. There are many like it but this one is mine."_>> -- Rifleman's creed (paraphrased) 1. WelcomeWelcome and congratulations! Learning how to write programs is a daunting but extremely useful skill and you've all taken the first step. It will be a long journey but you will soon notice that the skills you acquire along the way can already be put to use. This course is designed as an introduction to programming **and** an introduction to the Python language. We do not expect you to be able to wield Python independently after this course, but have a solid foundation to continue learning and applying your new skills. Without further ado, let us get you acquainted with your programming environment: Jupyter! 2. Introduction to Jupyter Jupyter is an *interactive programming environment*.**a. Start Jupyter**- In Anaconda Navigator, click "Jupyter Notebook"- Navigate to the folder where you saved the course materials**b. Make a new notebook** Navigate to a folder and click `New` → `Notebook` on the right. A new Notebook now pops up with an empty cell. In this cell you can directly input some Python code. Try out the following: ```python1+1```Click on the triangle symbol on the top of the notebook or type 'Shift+Enter' to run the code. The output will immediately appear on the screen and should look like this. Also, a new cell will have appeared in the notebook. A notebook is actually a set of cells in which you can input code. If you want another cell, you can click the '+' symbol on top of the notebook. Other interesting symbols up there are the stop symbol and the reload symbol. Whenever your code is stuck, you can stop it right there, or whenever you want to restart in a clean and fresh environment, you hit that restart button. **c. Running a cell**To stress the importance of the 'stop' button on top of this notebook, run the following code below. While it is running, the cell shows an asterisk, which means it's still being executed and your notebook won't be able to process any other code in another cell. In order to stop it, hit the stop button or type 'ii' in command mode.
###Code
import time
for second in [1, 2, 3]:
print(f"Hello {second}")
time.sleep(1)
###Output
_____no_output_____
###Markdown
 2.1 ExamplesThe above will suffice for the Jupyter Notebook introduction. We will dive into our first examples before moving on to the first chapter of our Python adventure. A program needs information (input) to run, and then needs to export its results so that you know what happened (output). The easiest way to do this is to send a 'text message' to the screen; this is possible with the print command which we will introduce here.In this section we also discuss some basics of Python syntax, and the errors that occur if you don't get it right.**a. Let's do some maths**Python is very intuitive and flexible in a way that there is no need for special characters such as semicolons, nor do you have to take spaces into account. Just note that Python is indent-sensitive, but we will get back to this.
###Code
1 + 1
2 - 5
3 * 4
10 / 2
###Output
_____no_output_____
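And here is a minimal sketch of the `print` command mentioned above (the values are only illustrative):
```python
print("Hello, world!")
answer = 1 + 1
print("1 + 1 =", answer)
```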
###Markdown
2.2 HELP!!!You can quickly access Python documentation for what you're working on with `help()`. For example:
###Code
help(print)
###Output
_____no_output_____
###Markdown
Messy way, opens a dialog, crashes```pythonwindow_name='image'cv2.namedWindow(window_name, cv2.WINDOW_NORMAL)cv2.imshow(window_name,img)cv2.waitKey(0)cv2.destroyAllWindows()```
###Code
img2 = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img2)
plt.title('apple')
plt.show()
###Output
_____no_output_____
###Markdown
Working with Grayscale
###Code
img3 = cv2.imread('images/apple.jpg', 0) # Read in grayscale: cv2.IMREAD_GRAYSCALE = 0
plt.imshow(img3, cmap='gray') #cmap='gray' to avoid weird colours
plt.title('apple grey')
plt.show()
cv2.IMREAD_GRAYSCALE
###Output
_____no_output_____
###Markdown
 Introduction to Modern, Web-based Methodologies Learning objectives- What are modern, web-based image analysis methods?- Why are web-based methods relevant for large images and reproducibility?- How does open source software fit into the computational ecosystem? Modern, web-based image analysis and visualization What is modern, web-based imaging analysis and visualization?1. The user interface is the **web browser**2. De facto communication on the internet with the **HTTPS** protocol3. Computation can happen in the **cloud** Important Technologies Evergreen browsers![Evergreen browsers](./images/evergreen-browsers.jpg)An **evergreen browser** releases frequently, striving for up-to-date and comprehensive support for *[web standards](https://en.wikipedia.org/wiki/Web_standards)*.Modern evergreen browsers include:- Google Chrome- Mozilla Firefox- Microsoft Edge- Opera- Apple SafariNote that Chrome, Edge, and Opera are all based on the same open source Chromium foundation. Safari often lags behind or is limited in standard support. Internet Explorer is no longer supported by Microsoft as of August 17, 2021, and lacks support for many Modern Web standard features. Programming Languages Client-side programming languages![JavaScript](./images/js_js_js.png)Client-side programming languages are languages that run in the web browser page. **JavaScript (JS)** is the *language of the web.* A working knowledge of JS is [a useful skill](https://www.youtube.com/watch?v=dFUlAQZB9Ng).![JS](./images/js.png)- JavaScript is the only language with ubiquitous support across modern web browsers.- Modern JavaScript runtimes are highly engineered and very performant.- JavaScript can also be executed server-side -- [*Node.js*](https://nodejs.org/en/) is the most popular server-side runtime.- JavaScript has one of the largest software package ecosystems: the [Node package manager, NPM, the npm registry, and the npm CLI](https://www.npmjs.com/).- The JavaScript language standard, [ECMAScript](https://en.wikipedia.org/wiki/ECMAScript), is a rapidly evolving, modern language. The most recent version is ES2020.- Most client-side code deployed on websites, whether written in JS or another language, is transpiled down to ECMAScript version 5 with Node.js.- The [Mozilla Developer Network (MDN)](https://developer.mozilla.org/en-US/docs/Web/JavaScript) is the best place to find JavaScript-related documentation. **TypeScript (TS)** is a superset of JavaScript that adds optional static type definitions.![TS](./images/ts.svg)- TypeScript is a popular alternative to JavaScript for writing client-side code.- When TypeScript is transpiled to JavaScript for deployment, static type checking occurs.- In addition to compilation error checking, explicit types in interfaces and other typed language advantages are available, e.g. IDE features.- Many language features that used to be unique to TypeScript, e.g. classes, have now been adopted into the JavaScript language standard. 
 Other languages can be compiled into **WebAssembly (Wasm)**, a portable compilation target enabling deployment on the web for client and server applications.![Wasm](./images/wasm.png)- WebAssembly is efficient and fast.- Wasm can be transmitted in a binary format.- Wasm supports hardware capabilities such as [atomics](https://courses.cs.washington.edu/courses/cse378/07au/lectures/L25-Atomic-Operations.pdf) and [SIMD](https://en.wikipedia.org/wiki/SIMD).- Wasm focuses on secure, memory-safe sandboxed execution in the browser.- Wasm runs on the same virtual machine as JavaScript, and it is debuggable in browsers.- Wasm is part of the open web platform, supported by all major browsers.- Wasm aims to be backwards compatible.- A JavaScript API is provided to interface with WebAssembly.Performant image processing code can be written in the following languages, and compiled to Wasm:- C/C++ via [Emscripten](https://emscripten.org/)- Rust- Java Other browser **web standards** *important for scientific imaging* include: ![WebGL](./images/webgl.svg)- **[WebGL](https://www.khronos.org/webgl/)**: a *web standard for a low-level 3D graphics API based on OpenGL ES, exposed to ECMAScript via the HTML5 Canvas.*- **[WebGPU](https://gpuweb.github.io/gpuweb/)**: *an API that exposes the capabilities of GPU hardware for the Web. The API is designed from the ground up to efficiently map to the Vulkan, Direct3D 12, and Metal native GPU APIs. WebGPU is not related to WebGL and does not explicitly target OpenGL ES.*- **Web Workers**: Run JavaScript in background threads separate from a web page's main thread. Useful for parallelism.- **ServiceWorkers**: another *worker* that acts as a proxy between the web browser page and the network. It can be used to *cache assets* to provide fast web apps with *offline* functionality. Server-side programming languages for web-based imaging include:- *Python*- Java- Rust- JavaScript- C++ Web-based storageDiscussed in the [data storage](./04_Data_Storage.ipynb) tutorial. Cloud computeDiscussed in the [distributed image processing](./05_Distributed_Processing.ipynb) tutorial. CommunicationCommunication web standards for scientific imaging include:- **REST**: *[Representational state transfer (REST)](https://en.wikipedia.org/wiki/Representational_state_transfer) is a software architectural style that defines a set of constraints to be used for creating Web services. HTTP-based RESTful APIs are defined with the following aspects: a base URI, such as http://api.example.com/, standard HTTP methods (e.g., GET, POST, PUT, and DELETE), and a media type that defines state transition data elements.*- **WebSockets** - a two-way communication channel that is more performant than HTTP requests. 
 Used by Jupyter, Colab, etc.- **[IPFS](https://en.wikipedia.org/wiki/InterPlanetary_File_System)** - *The InterPlanetary File System (IPFS) is a protocol and peer-to-peer network for storing and sharing data in a distributed file system.* Web-based methods, large images, and reproducibility **Why are web-based methods uniquely appropriate for working with extremely large images?** - Natural approach to remote storage - Helpful when data needs to be stored near the microscope or in the *cloud* - **Warning**: however, data-compute locality is often critical for performance!- Partial image chunks can be fetched on demand for analysis or visualization (see the sketch below)- Compression is a natural and commonplace component **Why do web-based methods uniquely support open science and reproducibility?**- Truly write once, run everywhere- Backwards and forwards compatibility- A standards-based ecosystem- Distributed compute and storage resources, which improves sustainability
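To make the REST idea above concrete, here is a rough Python sketch; the URL, chunk path, and byte range are hypothetical placeholders rather than a real imaging endpoint:
```python
import requests

# Hypothetical chunked, web-accessible image store (illustrative only)
url = "https://example.org/data/image.zarr/0/0.0.0"

# Fetch a single chunk on demand; a Range header limits the bytes transferred
response = requests.get(url, headers={"Range": "bytes=0-65535"}, timeout=10)
response.raise_for_status()
print(len(response.content), "bytes received")
```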
###Code
# A C++ image denoising method compiled to JavaScript over five years ago.
#
# No maintenance required, hosted on free resources, and executed by the client.
import IPython
url = 'https://insightsoftwareconsortium.github.io/ITKAnisotropicDiffusionLBR/'
width = 800
height = 1000
IPython.display.IFrame(url, width, height)
###Output
_____no_output_____
###Markdown
 Modern methods and traditional open source imaging software**How do modern web-based methods extend and interface with traditional open source scientific imaging software?**- **ImJoy** - [ImJoy](https://imjoy.io/docs//) is a plugin-powered hybrid computing platform for deploying deep learning applications such as advanced image analysis tools. - JavaScript or Python-based plugins. - Take the dedicated [I2K ImJoy tutorial](https://www.janelia.org/sites/default/files/You%20%2B%20Janelia/Conferences/10.pdf). - Read [the paper](https://rdcu.be/bYbGO).- [ImageJ.js](https://ij.imjoy.io/) - ImageJ compiled to WebAssembly and exposed with ImJoy. - [itk.js](https://insightsoftwareconsortium.github.io/itk-js/index.html): itk.js combines Emscripten and ITK to enable high-performance spatial analysis in a JavaScript runtime environment.- [pyodide](https://github.com/iodide-project/pyodide/): CPython scientific Python libraries, e.g. scikit-image, compiled to WebAssembly.- [Jupyter](https://jupyter.org/): browser-based literate programming, combining interactive code, equations, and visualizations - [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/): next generation browser interface, more JavaScript focus. - [Colab](https://colab.research.google.com/notebooks/intro.ipynbrecent=true): alternative Jupyter notebook interface with GPGPU hardware backends for deep learning and Google integrations. - [Voila](https://github.com/voila-dashboards/voila): quickly turn Jupyter notebooks into standalone web applications. Exercise: Learn about the web-browser development environment! Exercise 1: JavaScript Hello World!
###Code
%%javascript
console.log('Hello web world!')
###Output
_____no_output_____
###Markdown
Welcome to the ACM Machine Learning Subcommittee! Python libraries we will be using:* Numpy - store and manipulate numerical data efficiently* Scikit-Learn - training and evaluating models - Fetching datasets* Matplotlib - pretty pictures - pyplot - MATLAB-like syntax in python
###Code
import numpy as np
import pandas as pd
import sklearn
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Python 3 Basics Lists [ ]* https://docs.python.org/3/tutorial/datastructures.html
###Code
# list: similar to an array in other languages, but can store different types
x = []
# append() method to insert at the end. insert() method to insert at an index
x.append(1)
x.append(3.14)
x.append("hello")
print(x)
# assign x to a different list object
y = ['cats', 'dogs', 'birds']
print(y)
# extend method to join two lists
x.extend(y)
print(x)
# More advanced: List Comprehensions
# equivalent to creating an empty list and populating in a for-loop
z = [i*i for i in range(10)]
print(z)
###Output
[1, 3.14, 'hello']
['cats', 'dogs', 'birds']
[1, 3.14, 'hello', 'cats', 'dogs', 'birds']
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
###Markdown
List Slicing
###Code
# syntax: array[start:stop]
print(z)
print(z[1:5])
print(z[4:9])
# start/stop are optional. Beginning/End of the array assumed
print()
print(z[:]) # full array
print(z[3:]) # 3rd index -> end of array
print(z[:4]) # first 4 items
###Output
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
[1, 4, 9, 16]
[16, 25, 36, 49, 64]
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
[9, 16, 25, 36, 49, 64, 81]
[0, 1, 4, 9]
###Markdown
Dictionaries { }* Store (key,value) pairs
###Code
inventory = {'carrots' : 10,
'tomatoes' : 5,
'bananas' : 13}
print(inventory)
# add a key,value pair
inventory['apples'] = 7
# modify a key's value
inventory['tomatoes'] = 6
# remove an entry
del inventory['carrots']
# loop through entries
for key, value in inventory.items():
print("We have {} {}".format(value, key))
###Output
{'carrots': 10, 'tomatoes': 5, 'bananas': 13}
We have 6 tomatoes
We have 13 bananas
We have 7 apples
###Markdown
Numpy PrimerMain data structure: ndarray* Multidimensional array of same numeric typehttps://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.html
###Code
zero_matrix = np.zeros(shape=(3,3))
ones_matrix = np.ones(shape=(3,5))
rand_matrix = np.random.rand(5,2)
print(zero_matrix, zero_matrix.shape)
print(ones_matrix, ones_matrix.shape)
print(rand_matrix, rand_matrix.shape)
mult_matrix = np.dot(ones_matrix, rand_matrix) # dot product == matrix-matrix multiply, matrix-vector multiply
print(mult_matrix, mult_matrix.shape) # (3 x 5) * (5 x 2) => (3 x 2)
###Output
[[2.31763523 2.35232294]
[2.31763523 2.35232294]
[2.31763523 2.35232294]] (3, 2)
###Markdown
Array slicingShorthand syntax for accessing sub-arrays by 'slicing' along the array's dimensionsSuppose we only want rows 3 (inclusive) to 5 (exclusive) and columns 4 to 7. We would use the following line array[3:5, 4:7]
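For instance (a minimal sketch using an arbitrary 6x8 array), the slice described above picks out a 2x3 sub-block:
```python
demo = np.arange(48).reshape(6, 8)
print(demo[3:5, 4:7])  # rows 3-4, columns 4-6 => shape (2, 3)
```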
###Code
# array[start : stop]
# array[x_start : x_stop, y_start : y_stop, ...]
# if start and stop are not given, then the beginning
# and end of that array (or that array's dimensions) are assumed
print(zero_matrix[:,:]) # the full matrix
print()
print(zero_matrix[2,:]) # just the bottom row
print()
print(ones_matrix[0, 2:5]) # row 0, columns 2,3,4 => shape=(1,3)
print()
print(rand_matrix[:3, 0:]) # rows 0,1,2, columns 0,1 => shape=(3,2)
###Output
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
[0. 0. 0.]
[1. 1. 1.]
[[0.24116348 0.58869535]
[0.48465559 0.47192063]
[0.55230605 0.78650489]]
###Markdown
Matplotlib* Matplotlib.pyplot - MATLAB-like plotting syntax https://matplotlib.org/api/pyplot_api.html* We give pyplot numpy arrays, and it plots them
###Code
x = np.arange(0, 5, 0.1) # another useful numpy function - gives us a 1-D array from 0 to 5, step-size=0.1
y = np.sin(x) # Pass in an array of input values, and get an array of the same shape
print(x.shape, y.shape)
plt.plot(x,y)
plt.show()
plt.scatter(rand_matrix[:,0], rand_matrix[:,1])
plt.show()
# plot options
y1 = np.cos(x)
y2 = np.tanh(x)
plt.plot(x, y2, 'm|' , label='tanh')
plt.plot(x, y1, 'g+', label='cos')
plt.plot(x, y, 'r--', label='sin')
plt.legend()
plt.show()
# We'll be using this function to visualize image data - very handy!
plt.imshow(rand_matrix, cmap='Greys')
plt.show()
###Output
_____no_output_____
###Markdown
 Data UsedThe data used has been sourced from the following:- https://www.kaggle.com/zynicide/wine-reviews- https://www.kaggle.com/unsdsn/world-happiness In some cases it has been modified for size/training purposes Useful Links [Installation](https://pandas.pydata.org/pandas-docs/stable/install.html): Official Pandas Installation Guide [Basic Python](https://www.kaggle.com/learn/python): Free Python introductory course with tutorial and exercises [Basic Jupyter Notebook](https://dzone.com/articles/getting-started-with-jupyterlab): Introductory tutorial on the use of Jupyter Lab [Pandas cheatsheet](https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf): Very useful reference guide to the main features of pandas Library versionThe library versions used for this tutorial (and its dependencies) are the following. The environment specifications can be found in `environment.yml`
###Code
import pandas as pd
pd.show_versions()
###Output
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.4.final.0
python-bits : 64
OS : Windows
OS-release : 10
machine : AMD64
processor : Intel64 Family 6 Model 58 Stepping 9, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.None
pandas : 0.25.0
numpy : 1.16.4
pytz : 2019.3
dateutil : 2.8.1
pip : 19.3.1
setuptools : 41.6.0.post20191030
Cython : None
pytest : 5.2.4
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.10.3
IPython : 7.9.0
pandas_datareader: None
bs4 : 4.8.0
bottleneck : 1.2.1
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : 3.1.1
numexpr : 2.7.0
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 0.15.1
pytables : None
s3fs : None
scipy : 1.3.1
sqlalchemy : None
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
###Markdown
Oddstradamus Good odds and where to find them Introduction In the long run, the bookmaker always wins. The aim of this project is to disprove exactly this. We are in the football sports betting market and are trying to develop a strategy that is profitable in the long term and which will make the bookmaker leave the pitch as the loser. There are three aspects to this strategy that need to be optimised. These are:- the selection of suitable football matches- the prediction of the corresponding outcome- and the determination of the optimal stake per bet.In order to achieve this goal, a data set is compiled containing data from almost 60,000 football matches from 22 different leagues. This data set is processed, evaluated and then used to develop the long-term strategy with the help of selected machine learning algorithms. The data comes from the following source: [Data source](https://www.football-data.co.uk/downloadm.php) Merging the data The first step is to read the data from 264 .csv files and combine them appropriately. Before the data set is saved, an additional column with information about the season of the match is created to ensure a unique allocation.
###Code
# import packages
import glob
import os
import pandas as pd
# loading the individual datasets of the different seasons
file_type = 'csv'
seperator =','
df_20_21 = pd.concat([pd.read_csv(f, sep=seperator) for f in glob.glob('20:21' + "/*."+file_type)],ignore_index=True)
df_19_20 = pd.concat([pd.read_csv(f, sep=seperator) for f in glob.glob('19:20' + "/*."+file_type)],ignore_index=True)
df_18_19 = pd.concat([pd.read_csv(f, sep=seperator) for f in glob.glob('18:19' + "/*."+file_type)],ignore_index=True)
df_17_18 = pd.concat([pd.read_csv(f, sep=seperator) for f in glob.glob('17:18' + "/*."+file_type)],ignore_index=True)
df_16_17 = pd.concat([pd.read_csv(f, sep=seperator) for f in glob.glob('16:17' + "/*."+file_type)],ignore_index=True)
df_15_16 = pd.concat([pd.read_csv(f, sep=seperator) for f in glob.glob('15:16' + "/*."+file_type)],ignore_index=True)
df_14_15 = pd.concat([pd.read_csv(f, sep=seperator) for f in glob.glob('14:15' + "/*."+file_type)],ignore_index=True)
df_13_14 = pd.concat([pd.read_csv(f, sep=seperator) for f in glob.glob('13:14' + "/*."+file_type)],ignore_index=True)
# add a column of the season for clear assignment
df_20_21['Season'] = '20/21'
df_19_20['Season'] = '19/20'
df_18_19['Season'] = '18/19'
df_17_18['Season'] = '17/18'
df_16_17['Season'] = '16/17'
df_15_16['Season'] = '15/16'
df_14_15['Season'] = '14/15'
df_13_14['Season'] = '13/14'
# combining the individual datasets into one
dfs = [df_14_15, df_15_16, df_16_17, df_17_18, df_18_19, df_19_20, df_20_21]
results = df_13_14.append(dfs, sort=False)
# saving the merged dataframe for processing
results.to_csv("Data/Results2013_2021.csv")
###Output
_____no_output_____
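As a quick sanity check on the merge (a minimal sketch, assuming the `results` DataFrame created above), the number of matches recorded per season can be inspected:
```python
# Matches per season after combining all files
print(results["Season"].value_counts())
```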
###Markdown
Quick Overview
###Code
# output of the data shape
results.shape
###Output
_____no_output_____
###Markdown
 In its initial state, the data set comprises almost 60,000 rows and 133 columns. In addition to information on league affiliation, the season of the match and the team constellation, information on the final result is available in the form of the number of goals, shots, shots on target, corners, fouls and yellow and red cards for home and away teams. In addition, the dataset contains information on betting odds from a large number of bookmakers.As a large proportion of the columns are only sporadically filled, especially with regard to the betting odds, only those bookmakers whose odds are available for all 60,000 matches were kept. This procedure alone reduced the data set from 133 to 31 columns.
###Code
# selecting the necessary columns of the original data set
results = results[['Div', 'Season', 'HomeTeam','AwayTeam', 'FTHG', 'FTAG', 'FTR', 'HS', 'AS', 'HST', 'AST', 'HF', 'AF', 'HC',
'AC', 'HY', 'AY', 'HR', 'AR','B365H','B365D','B365A', 'BWH','BWD','BWA', 'IWH', 'IWD', 'IWA', 'WHH', 'WHD', 'WHA']]
results.shape
###Output
_____no_output_____
###Markdown
 Introduction> A quick intro to sperm whale vocalizations Sperm whales are magnificent creatures. They are the largest predator to inhabit planet Earth. They are also the loudest animal - they can produce vocalizations at around 236 dB (that is louder than a jet engine)!They can hear each other from thousands of miles away and some researchers believe that they are able to keep in contact with one another on opposite sides of the planet.Katie Zacarian swimming with Hope, a juvenile Sperm Whale (Physeter macrocephalus).Location: Waters of the Eastern Caribbean Sea, near the island of Dominica.Photo: Keri WilkTheir vocalizations are called clicks and this is what they sound like:
###Code
from IPython.lib.display import Audio
import librosa
x, rate = librosa.load('data/audio/72009002.wav', sr=None)
Audio(x, rate=rate)
# a recording from the ‘Best Of’ cuts from the William A. Watkins Collection of Marine Mammal Sound Recordings database
# from Woods Hole Oceanographic Institution (https://cis.whoi.edu/science/B/whalesounds/index.cfm)
###Output
_____no_output_____
###Markdown
 We can take a look at the waveplot to better understand what is going on.
###Code
import librosa.display
librosa.display.waveplot(x, rate)
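# As an additional view (a rough sketch, assuming x and rate from the cell above),
# a spectrogram makes the broadband structure of the clicks easier to see.
import numpy as np
import matplotlib.pyplot as plt
plt.figure()
S_db = librosa.amplitude_to_db(np.abs(librosa.stft(x)), ref=np.max)
img = librosa.display.specshow(S_db, sr=rate, x_axis="time", y_axis="hz")
plt.colorbar(img, format="%+2.0f dB")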
###Output
_____no_output_____ |
build/_downloads/365b87796db17e02797092ebfba32df9/autograd_tutorial_old.ipynb | ###Markdown
 Autograd========Autograd is now a core torch package for automatic differentiation.It uses a tape-based system for automatic differentiation.In the forward phase, the autograd tape will remember all the operations it executed, and in the backward phase, it will replay the operations.Tensors that track history--------------------------In autograd, if any input ``Tensor`` of an operation has ``requires_grad=True``, the computation will be tracked. After computing the backward pass, a gradient w.r.t. this tensor is accumulated into the ``.grad`` attribute.There’s one more class which is very important for the autograd implementation - a ``Function``. ``Tensor`` and ``Function`` are interconnected and build up an acyclic graph that encodes a complete history of computation. Each variable has a ``.grad_fn`` attribute that references the ``Function`` that has created the ``Tensor`` (except for Tensors created by the user - these have ``None`` as ``.grad_fn``).If you want to compute the derivatives, you can call ``.backward()`` on a ``Tensor``. If the ``Tensor`` is a scalar (i.e. it holds a one-element tensor), you don’t need to specify any arguments to ``backward()``, however if it has more elements, you need to specify a ``grad_output`` argument that is a tensor of matching shape.
###Code
import torch
###Output
_____no_output_____
###Markdown
Create a tensor and set requires_grad=True to track computation with it
###Code
x = torch.ones(2, 2, requires_grad=True)
print(x)
print(x.data)
print(x.grad)
print(x.grad_fn) # we've created x ourselves
###Output
_____no_output_____
###Markdown
Do an operation of x:
###Code
y = x + 2
print(y)
###Output
_____no_output_____
###Markdown
 y was created as a result of an operation, so it has a grad_fn
###Code
print(y.grad_fn)
###Output
_____no_output_____
###Markdown
More operations on y:
###Code
z = y * y * 3
out = z.mean()
print(z, out)
###Output
_____no_output_____
###Markdown
 ``.requires_grad_( ... )`` changes an existing Tensor's ``requires_grad`` flag in-place. The input flag defaults to ``True`` if not given.
###Code
a = torch.randn(2, 2)
a = ((a * 3) / (a - 1))
print(a.requires_grad)
a.requires_grad_(True)
print(a.requires_grad)
b = (a * a).sum()
print(b.grad_fn)
###Output
_____no_output_____
###Markdown
Gradients---------let's backprop now and print gradients d(out)/dx
###Code
out.backward()
print(x.grad)
###Output
_____no_output_____
###Markdown
 By default, gradient computation flushes all the internal buffers contained in the graph, so if you ever want to do the backward on some part of the graph twice, you need to pass in ``retain_graph=True`` during the first pass.
###Code
x = torch.ones(2, 2, requires_grad=True)
y = x + 2
y.backward(torch.ones(2, 2), retain_graph=True)
# the retain_graph flag will prevent the internal buffers from being freed
print(x.grad)
z = y * y
print(z)
###Output
_____no_output_____
###Markdown
just backprop random gradients
###Code
gradient = torch.randn(2, 2)
# this would fail if we didn't specify
# that we want to retain variables
y.backward(gradient)
print(x.grad)
###Output
_____no_output_____
###Markdown
 You can also stop autograd from tracking history on Tensors with requires_grad=True by wrapping the code block in ``with torch.no_grad():``
###Code
print(x.requires_grad)
print((x ** 2).requires_grad)
with torch.no_grad():
print((x ** 2).requires_grad)
###Output
_____no_output_____ |
sagemaker-pipelines/tabular/custom_callback_pipelines_step/sagemaker-pipelines-callback-step.ipynb | ###Markdown
Glue ETL as part of a SageMaker pipelineThis notebook will show how to use the [Callback Step](https://docs.aws.amazon.com/sagemaker/latest/dg/build-and-manage-steps.htmlstep-type-callback) to extend your SageMaker Pipeline steps to include tasks performed by other AWS services or custom integrations. For this notebook, you'll learn how to include a Glue ETL job as part of a SageMaker ML pipeline. The overall flow will be:* Define Glue ETL job* Run Spark data preparation job in Glue* Run ML training job on SageMaker* Evaluate ML model performance The pipeline sends a message to an SQS queue. A Lambda function responds to SQS and invokes an ECS Fargate task. The task will handle running the Spark job and monitoring for progress. It'll then send the callback token back to the pipeline.![CustomStepPipeline](./images/pipelinescustom.png) Data setWe'll use the Yellow Taxi records from [NYC](https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page) in 2020. In this [blog](https://aws.amazon.com/blogs/machine-learning/use-the-built-in-amazon-sagemaker-random-cut-forest-algorithm-for-anomaly-detection/), we used a prepared version of the data that had passenger counts per half hour. In this notebook we'll take the raw NYC data and prepare the half-hour totals. One-time setupThis notebook needs permissions to:* Create Lambda functions* Create an ECS cluster* Upload images to ECR* Create IAM roles* Invoke SageMaker API for pipelines* Create security groups* Write data into S3* Create security groups* Describe VPC informationIn a production setting, we would deploy a lot of these resources using an infrastructure-as-code tool like CloudFormation or the CDK. But for simplicity in this demo we'll create everything in this notebook. Setup prerequisite IAM roles First we need to create the following IAM roles:* A role for the ECS Fargate task and task runner. Besides the usual policies that allow pulling images and creating logs, the task needs permission to start and monitor a Glue job, and send the callback token to SageMaker. Because the specific SageMaker action isn't visible in IAM yet, for now we give the task full SageMaker permissions.* A role for Glue with permissions to read and write from our S3 bucket.* A role for Lambda with permissions to run an ECS task, send the failure callback if something goes wrong, and poll SQS.For your convenience, we have prepared the setup_iam_roles.py script to help create the IAM roles and respective policies. In most cases, this script will be run by administrator teams, on behalf of data scientists.
###Code
import sagemaker
from setup_iam_roles import create_glue_pipeline_role
from setup_iam_roles import create_lambda_sm_pipeline_role
from setup_iam_roles import create_ecs_task_role, create_task_runner_role
sagemaker_session = sagemaker.session.Session()
default_bucket = sagemaker_session.default_bucket()
ecs_role_arn = create_ecs_task_role(role_name="fg_task_pipeline_role")
task_role_arn = create_task_runner_role(role_name="fg_task_runner_pipeline_role")
glue_role_arn = create_glue_pipeline_role(role_name="glue_pipeline_role", bucket=default_bucket)
lambda_role_arn = create_lambda_sm_pipeline_role(
role_name="lambda_sm_pipeline_role", ecs_role_arn=ecs_role_arn, task_role_arn=task_role_arn
)
###Output
_____no_output_____
###Markdown
ProcessingSetup the configurations & tasks that will be used to process data in the pipeline. Set up ECS Fargate clusterThe ECS Fargate cluster will be used to execute a Fargate task that will handle running the Spark data pre-processing in Glue and monitoring for progress. This task is invoked by a Lambda function that gets called whenever the CallbackStep puts a message to SQS.**Pipeline Step Tasks:** *CallbackStep -> SQS -> Lambda -> Fargate Task -> Glue Job*
###Code
import boto3
ecs = boto3.client("ecs")
response = ecs.create_cluster(clusterName="FargateTaskRunner")
print(f"Cluster Name: {response['cluster']['clusterName']}")
print(f"Cluster ARN: {response['cluster']['clusterArn']}")
print(f"Cluster Status: {response['cluster']['status']}")
cluster_arn = response["cluster"]["clusterArn"]
###Output
_____no_output_____
###Markdown
Build container image for Fargate taskFirst, install the Amazon SageMaker Studio Build CLI convenience package that allows you to build docker images from your Studio environment. Please ensure you have the pre-requisites in place as outlined in this [blog](https://aws.amazon.com/blogs/machine-learning/using-the-amazon-sagemaker-studio-image-build-cli-to-build-container-images-from-your-studio-notebooks/).
###Code
import sys
!{sys.executable} -m pip install sagemaker_studio_image_build
###Output
_____no_output_____
###Markdown
Next, write the code to your local environment that will be used to build the docker image. **task.py:** This code will be used by the task runner to start and monitor the Glue job then report status back to SageMaker Pipelines via *send_pipeline_execution_step_success* or *send_pipeline_execution_step_failure*
###Code
!mkdir container
%%writefile container/task.py
import boto3
import os
import sys
import traceback
import time
if "inputLocation" in os.environ:
input_uri = os.environ["inputLocation"]
else:
print("inputLocation not found in environment")
sys.exit(1)
if "outputLocation" in os.environ:
output_uri = os.environ["outputLocation"]
else:
print("outputLocation not found in environment")
sys.exit(1)
if "token" in os.environ:
token = os.environ["token"]
else:
print("token not found in environment")
sys.exit(1)
if "glue_job_name" in os.environ:
glue_job_name = os.environ["glue_job_name"]
else:
print("glue_job_name not found in environment")
sys.exit(1)
print(f"Processing from {input_uri} to {output_uri} using callback token {token}")
sagemaker = boto3.client("sagemaker")
glue = boto3.client("glue")
poll_interval = 60
try:
t1 = time.time()
response = glue.start_job_run(
JobName=glue_job_name, Arguments={"--output_uri": output_uri, "--input_uri": input_uri}
)
job_run_id = response["JobRunId"]
print(f"Starting job {job_run_id}")
job_status = "STARTING"
job_error = ""
while job_status in ["STARTING", "RUNNING", "STOPPING"]:
time.sleep(poll_interval)
response = glue.get_job_run(
JobName=glue_job_name, RunId=job_run_id, PredecessorsIncluded=False
)
job_status = response["JobRun"]["JobRunState"]
if "ErrorMessage" in response["JobRun"]:
job_error = response["JobRun"]["ErrorMessage"]
print(f"Job is in state {job_status}")
t2 = time.time()
total_time = (t2 - t1) / 60.0
if job_status == "SUCCEEDED":
print("Job succeeded")
sagemaker.send_pipeline_execution_step_success(
CallbackToken=token,
OutputParameters=[
{"Name": "minutes", "Value": str(total_time)},
{
"Name": "s3_data_out",
"Value": str(output_uri),
},
],
)
else:
print(f"Job failed: {job_error}")
sagemaker.send_pipeline_execution_step_failure(CallbackToken=token, FailureReason=job_error)
except Exception as e:
trc = traceback.format_exc()
print(f"Error running ETL job: {str(e)}:\m {trc}")
sagemaker.send_pipeline_execution_step_failure(CallbackToken=token, FailureReason=str(e))
###Output
_____no_output_____
###Markdown
Next, write the code for your Dockerfile...
###Code
%%writefile container/Dockerfile
#FROM ubuntu:18.04
FROM public.ecr.aws/ubuntu/ubuntu:latest
RUN apt-get -y update && apt-get install -y --no-install-recommends \
python3-pip \
python3-setuptools \
curl \
unzip
RUN /usr/bin/pip3 install boto3
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
COPY task.py /opt
CMD /usr/bin/python3 /opt/task.py
###Output
_____no_output_____
###Markdown
Finally, use the studio image build CLI to build and push your image to ECR
###Code
%%sh
cd container
sm-docker build . --repository ecs-fargate-task:latest
###Output
_____no_output_____
###Markdown
After building the image, you have to grab the ECR URI and define a local notebook variable that holds it in the last cell in this section.
###Code
import sagemaker as sage
sess = sage.Session()
account = sess.boto_session.client("sts").get_caller_identity()["Account"]
region = boto3.session.Session().region_name
task_uri = "{}.dkr.ecr.{}.amazonaws.com/ecs-fargate-task".format(account, region)
print("URI:", task_uri)
###Output
_____no_output_____
###Markdown
Set up ECS Fargate taskNow we will create and register the task using the roles we create above...
###Code
region = boto3.Session().region_name
response = ecs.register_task_definition(
family="FargateTaskRunner",
taskRoleArn=task_role_arn,
executionRoleArn=ecs_role_arn,
networkMode="awsvpc",
containerDefinitions=[
{
"name": "FargateTask",
"image": task_uri,
"cpu": 512,
"memory": 1024,
"essential": True,
"environment": [
{"name": "inputLocation", "value": "temp"},
{"name": "outputLocation", "value": "temp"},
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-create-group": "true",
"awslogs-group": "glue_sg_pipeline",
"awslogs-region": region,
"awslogs-stream-prefix": "task",
},
},
},
],
requiresCompatibilities=[
"FARGATE",
],
cpu="512",
memory="1024",
)
print(f"Task definition ARN: {response['taskDefinition']['taskDefinitionArn']}")
task_arn = response["taskDefinition"]["taskDefinitionArn"]
###Output
_____no_output_____
###Markdown
Copy data to our bucketNext, we'll copy the 2020 taxi data to the sagemaker session default bucket breaking up the data per month.
###Code
s3 = boto3.client("s3")
taxi_bucket = "nyc-tlc"
taxi_prefix = "taxi"
for month in ["01", "02", "03", "04", "05", "06", "07", "08", "09", "10", "11", "12"]:
copy_source = {"Bucket": taxi_bucket, "Key": f"trip data/yellow_tripdata_2020-{month}.csv"}
s3.copy(copy_source, default_bucket, f"{taxi_prefix}/yellow_tripdata_2020-{month}.csv")
default_bucket
###Output
_____no_output_____
###Markdown
Create SQS queue for pipelineIn this step, we'll create the SQS queue that will be used by the CallbackStep inside SageMaker Pipeline steps. SageMaker Pipelines will put a token to this queue that will serve as a trigger for your Lambda function which will initiate the Fargate task to process your data.
###Code
sqs_client = boto3.client("sqs")
queue_url = ""
queue_name = "pipeline_callbacks_glue_prep"
try:
response = sqs_client.create_queue(QueueName=queue_name)
except:
print(f"Failed to create queue")
###Output
_____no_output_____
###Markdown
Format the queue URL to the same format we will need later on.
###Code
queue_url = f"https://sqs.{region}.amazonaws.com/{account}/{queue_name}"
queue_url
###Output
_____no_output_____
###Markdown
VPC and security settingsFor this setup, we'll use the default VPC and all of its subnets for the fargate task. However, we'll create a new security group for the tasks that allows egress but no ingress.
###Code
ec2 = boto3.client("ec2")
response = ec2.describe_vpcs(Filters=[{"Name": "isDefault", "Values": ["true"]}])
default_vpc_id = response["Vpcs"][0]["VpcId"]
response = ec2.describe_subnets(Filters=[{"Name": "vpc-id", "Values": [default_vpc_id]}])
task_subnets = []
for r in response["Subnets"]:
task_subnets.append(r["SubnetId"])
response = ec2.create_security_group(
Description="Security group for Fargate tasks", GroupName="fg_task_sg", VpcId=default_vpc_id
)
sg_id = response["GroupId"]
response = ec2.authorize_security_group_ingress(
GroupId=sg_id,
IpPermissions=[
{
"FromPort": 0,
"IpProtocol": "-1",
"UserIdGroupPairs": [
{"GroupId": sg_id, "Description": "local SG ingress"},
],
"ToPort": 65535,
},
],
)
###Output
_____no_output_____
###Markdown
Create ETL scriptThe ETL job will take two arguments, the location of the input data in S3 and the output path in S3.
###Code
%%writefile etl.py
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.sql.types import IntegerType
from pyspark.sql import functions as F
## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ["JOB_NAME", "input_uri", "output_uri"])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args["JOB_NAME"], args)
df = spark.read.format("csv").option("header", "true").load("{0}*.csv".format(args["input_uri"]))
df = df.withColumn("Passengers", df["passenger_count"].cast(IntegerType()))
df = df.withColumn(
"pickup_time",
F.to_timestamp(
F.unix_timestamp("tpep_pickup_datetime", "yyyy-MM-dd HH:mm:ss").cast("timestamp")
),
)
dfW = df.groupBy(F.window("pickup_time", "30 minutes")).agg(F.sum("Passengers").alias("passenger"))
dfOut = dfW.drop("window")
dfOut.repartition(1).write.option("timestampFormat", "yyyy-MM-dd HH:mm:ss").csv(args["output_uri"])
job.commit()
s3.upload_file("etl.py", default_bucket, "pipeline/etl.py")
glue_script_location = f"s3://{default_bucket}/pipeline/etl.py"
glue_script_location
###Output
_____no_output_____
###Markdown
Create ETL jobNext, we'll create the glue job using the script and roles creates in the prevous steps...
###Code
glue = boto3.client("glue")
response = glue.create_job(
Name="GlueDataPrepForPipeline",
Description="Prepare data for SageMaker training",
Role=glue_role_arn,
ExecutionProperty={"MaxConcurrentRuns": 1},
Command={
"Name": "glueetl",
"ScriptLocation": glue_script_location,
},
MaxRetries=0,
Timeout=60,
MaxCapacity=10.0,
GlueVersion="2.0",
)
glue_job_name = response["Name"]
glue_job_name
###Output
_____no_output_____
###Markdown
 Create Lambda functionThe Lambda function will be triggered on new messages to the SQS queue created by the CallbackStep in SageMaker Pipelines. The Lambda function is responsible for initiating the run of your Fargate task. Now, write the code that will be used in the Lambda function.
###Code
%%writefile queue_handler.py
import json
import boto3
import os
import traceback
ecs = boto3.client("ecs")
sagemaker = boto3.client("sagemaker")
def handler(event, context):
print(f"Got event: {json.dumps(event)}")
cluster_arn = os.environ["cluster_arn"]
task_arn = os.environ["task_arn"]
task_subnets = os.environ["task_subnets"]
task_sgs = os.environ["task_sgs"]
glue_job_name = os.environ["glue_job_name"]
print(f"Cluster ARN: {cluster_arn}")
print(f"Task ARN: {task_arn}")
print(f"Task Subnets: {task_subnets}")
print(f"Task SG: {task_sgs}")
print(f"Glue job name: {glue_job_name}")
for record in event["Records"]:
payload = json.loads(record["body"])
print(f"Processing record {payload}")
token = payload["token"]
print(f"Got token {token}")
try:
input_data_s3_uri = payload["arguments"]["input_location"]
output_data_s3_uri = payload["arguments"]["output_location"]
print(f"Got input_data_s3_uri {input_data_s3_uri}")
print(f"Got output_data_s3_uri {output_data_s3_uri}")
response = ecs.run_task(
cluster=cluster_arn,
count=1,
launchType="FARGATE",
taskDefinition=task_arn,
networkConfiguration={
"awsvpcConfiguration": {
"subnets": task_subnets.split(","),
"securityGroups": task_sgs.split(","),
"assignPublicIp": "ENABLED",
}
},
overrides={
"containerOverrides": [
{
"name": "FargateTask",
"environment": [
{"name": "inputLocation", "value": input_data_s3_uri},
{"name": "outputLocation", "value": output_data_s3_uri},
{"name": "token", "value": token},
{"name": "glue_job_name", "value": glue_job_name},
],
}
]
},
)
if "failures" in response and len(response["failures"]) > 0:
f = response["failures"][0]
print(f"Failed to launch task for token {token}: {f['reason']}")
                sagemaker.send_pipeline_execution_step_failure(CallbackToken=token, FailureReason=f["reason"])
else:
print(f"Launched task {response['tasks'][0]['taskArn']}")
except Exception as e:
trc = traceback.format_exc()
print(f"Error handling record: {str(e)}:\m {trc}")
sagemaker.send_step_failure(CallbackToken=token, FailureReason=e)
###Output
_____no_output_____
###Markdown
Finally, bundle the code and upload it to S3 then create the Lambda function...
###Code
import zipfile
archive = zipfile.ZipFile("queue_handler.zip", "w")
archive.write("queue_handler.py")
s3 = boto3.client("s3")
s3.upload_file("queue_handler.zip", default_bucket, "pipeline/queue_handler.zip")
lambda_client = boto3.client("lambda")
lambda_client.create_function(
Code={
"S3Bucket": default_bucket,
"S3Key": "pipeline/queue_handler.zip",
},
FunctionName="SMPipelineQueueHandler",
Description="Process Glue callback messages from SageMaker Pipelines",
Handler="queue_handler.handler",
Publish=True,
Role=lambda_role_arn,
Runtime="python3.7",
Timeout=20,
MemorySize=128,
PackageType="Zip",
Environment={
"Variables": {
"cluster_arn": cluster_arn,
"task_arn": task_arn,
"task_subnets": ",".join(task_subnets),
"task_sgs": sg_id,
"glue_job_name": glue_job_name,
}
},
)
###Output
_____no_output_____
###Markdown
Set up Lambda as SQS targetNext, we'll attach the lambda function created above to the SQS queue we previously created. This ensures that your Lambda will be triggered when new messages are put to your SQS queue.
###Code
lambda_client.create_event_source_mapping(
EventSourceArn=f"arn:aws:sqs:{region}:{account}:{queue_name}",
FunctionName="SMPipelineQueueHandler",
Enabled=True,
BatchSize=10,
)
###Output
_____no_output_____
###Markdown
 Build & Execute SageMaker PipelineNow that all of the components that support the tasks within your pipeline steps are created and configured, we're ready to bring it all together and set up the pipeline. First, install the SageMaker Python SDK.
###Code
!pip install "sagemaker==2.91.1"
###Output
_____no_output_____
###Markdown
Pipeline Initialization
###Code
import time
timestamp = int(time.time())
from sagemaker.workflow.parameters import (
ParameterInteger,
ParameterString,
)
input_data = ParameterString(
name="InputData", default_value=f"s3://{default_bucket}/{taxi_prefix}/"
)
id_out = ParameterString(name="IdOut", default_value="taxiout" + str(timestamp))
output_data = ParameterString(
name="OutputData", default_value=f"s3://{default_bucket}/{taxi_prefix}_output/"
)
training_instance_count = ParameterInteger(name="TrainingInstanceCount", default_value=1)
###Output
_____no_output_____
###Markdown
Pipeline Steps 1 - Call Back Step First, we'll configure the callback step. The callback step will accept the following **inputs**: * S3 location of our raw taxi data * SQS queue The callback step will return the following **outputs**: * S3 location of processed data to be used for model training
###Code
from sagemaker.workflow.callback_step import CallbackStep, CallbackOutput, CallbackOutputTypeEnum
callback1_output = CallbackOutput(
output_name="s3_data_out", output_type=CallbackOutputTypeEnum.String
)
step_callback_data = CallbackStep(
name="GluePrepCallbackStep",
sqs_queue_url=queue_url,
inputs={
"input_location": f"s3://{default_bucket}/{taxi_prefix}/",
"output_location": f"s3://{default_bucket}/{taxi_prefix}_{id_out}/",
},
outputs=[callback1_output],
)
###Output
_____no_output_____
###Markdown
2 - Training Step Next, we'll configure the training step by first configuring the estimator for random cut forest. Then, we'll configure the training step. The training step will accept the following **inputs**: * S3 location of processed data to be used for model training * ECR containing the training image for rcf * Estimator configuration The training step will return the following **outputs**: * S3 location of the trained model artifact
###Code
containers = {
"us-west-2": "174872318107.dkr.ecr.us-west-2.amazonaws.com/randomcutforest:latest",
"us-east-1": "382416733822.dkr.ecr.us-east-1.amazonaws.com/randomcutforest:latest",
"us-east-2": "404615174143.dkr.ecr.us-east-2.amazonaws.com/randomcutforest:latest",
"eu-west-1": "438346466558.dkr.ecr.eu-west-1.amazonaws.com/randomcutforest:latest",
}
region_name = boto3.Session().region_name
container = containers[region_name]
model_prefix = "model"
session = sagemaker.Session()
rcf = sagemaker.estimator.Estimator(
container,
sagemaker.get_execution_role(),
output_path="s3://{}/{}/output".format(default_bucket, model_prefix),
instance_count=training_instance_count,
instance_type="ml.c5.xlarge",
sagemaker_session=session,
)
rcf.set_hyperparameters(num_samples_per_tree=200, num_trees=50, feature_dim=1)
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.steps import TrainingStep
step_train = TrainingStep(
name="TrainModel",
estimator=rcf,
inputs={
"train": TrainingInput(
# s3_data = Output of the previous call back step
s3_data=step_callback_data.properties.Outputs["s3_data_out"],
content_type="text/csv;label_size=0",
distribution="ShardedByS3Key",
),
},
)
###Output
_____no_output_____
###Markdown
3 - Create ModelNext, we'll package the trained model for deployment. The create model step will accept the following **inputs**: * S3 location of the trained model artifact * ECR containing the inference image for rcf The create model step will return the following **outputs**: * SageMaker packaged model
###Code
from sagemaker.model import Model
from sagemaker import get_execution_role
role = get_execution_role()
image_uri = sagemaker.image_uris.retrieve("randomcutforest", region)
model = Model(
image_uri=image_uri,
model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
sagemaker_session=sagemaker_session,
role=role,
)
from sagemaker.inputs import CreateModelInput
from sagemaker.workflow.steps import CreateModelStep
inputs = CreateModelInput(
instance_type="ml.m5.large",
)
create_model = CreateModelStep(
name="TaxiModel",
model=model,
inputs=inputs,
)
###Output
_____no_output_____
###Markdown
4 - Batch TransformNext, we'll deploy the model using batch transform then do a quick evaluation with our data to compute anomaly scores for each of our data points on input. The batch transform step will accept the following **inputs**: * SageMaker packaged model * S3 location of the input data * ECR containing the inference image for rcf The batch transform step will return the following **outputs**: * S3 location of the output data (anomaly scores)
###Code
base_uri = step_callback_data.properties.Outputs["s3_data_out"]
output_prefix = "batch-out"
from sagemaker.transformer import Transformer
transformer = Transformer(
model_name=create_model.properties.ModelName,
instance_type="ml.m5.xlarge",
assemble_with="Line",
accept="text/csv",
instance_count=1,
output_path=f"s3://{default_bucket}/{output_prefix}/",
)
from sagemaker.inputs import TransformInput
from sagemaker.workflow.steps import TransformStep
batch_data = step_callback_data.properties.Outputs["s3_data_out"]
step_transform = TransformStep(
name="TaxiTransform",
transformer=transformer,
inputs=TransformInput(
data=batch_data,
content_type="text/csv",
split_type="Line",
input_filter="$[0]",
join_source="Input",
output_filter="$[0,-1]",
),
)
###Output
_____no_output_____
###Markdown
Configure Pipeline Using Created Steps
###Code
import uuid
id_out = uuid.uuid4().hex
print("Unique ID:", id_out)
from sagemaker.workflow.pipeline import Pipeline
pipeline_name = f"GluePipeline-{id_out}"
pipeline = Pipeline(
name=pipeline_name,
parameters=[
input_data,
training_instance_count,
id_out,
],
steps=[step_callback_data, step_train, create_model, step_transform],
)
from sagemaker import get_execution_role
pipeline.upsert(role_arn=get_execution_role())
import json
definition = json.loads(pipeline.definition())
definition
###Output
_____no_output_____
###Markdown
Execute Pipeline
###Code
execution = pipeline.start()
execution.describe()
execution.list_steps()
###Output
_____no_output_____
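###Markdown
The pipeline runs asynchronously, so you may want to block until it finishes and then review each step. A minimal sketch, assuming the `execution` object from the cell above; note that `wait()` raises if the execution ends in a failed state or the waiter times out.
###Code
# Block until the pipeline execution completes, then print a short per-step status summary.
execution.wait()
for step in execution.list_steps():
    print(step["StepName"], step["StepStatus"])
###Output
_____no_output_____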
###Markdown
Glue ETL as part of a SageMaker pipelineThis notebook will show how to use the [Callback Step](https://docs.aws.amazon.com/sagemaker/latest/dg/build-and-manage-steps.htmlstep-type-callback) to extend your SageMaker Pipeline steps to include tasks performed by other AWS services or custom integrations. For this notebook, you'll learn how to include a Glue ETL job as part of a SageMaker ML pipeline. The overall flow will be:* Define Glue ETL job* Run Spark data preparation job in Glue* Run ML training job on SageMaker* Evaluate ML model performance The pipeline sends a message to an SQS queue. A Lambda function responds to SQS and invokes an ECS Fargate task. The task will handle running the Spark job and monitoring for progress. It'll then send the callback token back to the pipeline.![CustomStepPipeline](./images/pipelinescustom.png) Data setWe'll use the Yellow Taxi records from [NYC](https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page) in 2020. In this [blog](https://aws.amazon.com/blogs/machine-learning/use-the-built-in-amazon-sagemaker-random-cut-forest-algorithm-for-anomaly-detection/), we used a prepared version of the data that had passenger counts per half hour. In this notebook we'll take the raw NYC data and prepare the half-hour totals. One-time setupThis notebook needs permissions to:* Create Lambda functions* Create an ECS cluster* Upload images to ECR* Create IAM roles* Invoke SageMaker API for pipelines* Create security groups* Write data into S3* Create security groups* Describe VPC informationIn a production setting, we would deploy a lot of these resources using an infrastructure-as-code tool like CloudFormation or the CDK. But for simplicity in this demo we'll create everything in this notebook. Setup prerequisite IAM roles First we need to create the following IAM roles:* A role for the ECS Fargate task and task runner. Besides the usual policies that allow pulling images and creating logs, the task needs permission to start and monitor a Glue job, and send the callback token to SageMaker. Because the specific SageMaker action isn't visible in IAM yet, for now we give the task full SageMaker permissions.* A role for Glue with permissions to read and write from our S3 bucket.* A role for Lambda with permissions to run an ECS task, send the failure callback if something goes wrong, and poll SQS.For your convenience, we have prepared the setup_iam_roles.py script to help create the IAM roles and respective policies. In most cases, this script will be run by administrator teams, on behalf of data scientists.
###Code
import sagemaker
from setup_iam_roles import create_glue_pipeline_role
from setup_iam_roles import create_lambda_sm_pipeline_role
from setup_iam_roles import create_ecs_task_role, create_task_runner_role
sagemaker_session = sagemaker.session.Session()
default_bucket = sagemaker_session.default_bucket()
ecs_role_arn = create_ecs_task_role(role_name='fg_task_pipeline_role')
task_role_arn = create_task_runner_role(role_name='fg_task_runner_pipeline_role')
glue_role_arn = create_glue_pipeline_role(role_name='glue_pipeline_role', bucket=default_bucket)
lambda_role_arn = create_lambda_sm_pipeline_role(
role_name='lambda_sm_pipeline_role',
ecs_role_arn=ecs_role_arn,
task_role_arn=task_role_arn
)
###Output
_____no_output_____
###Markdown
ProcessingSetup the configurations & tasks that will be used to process data in the pipeline. Set up ECS Fargate clusterThe ECS Fargate cluster will be used to execute a Fargate task that will handle running the Spark data pre-processing in Glue and monitoring for progress. This task is invoked by a Lambda function that gets called whenever the CallbackStep puts a message to SQS.**Pipeline Step Tasks:** *CallbackStep -> SQS -> Lambda -> Fargate Task -> Glue Job*
###Code
import boto3
ecs = boto3.client('ecs')
response = ecs.create_cluster(
clusterName='FargateTaskRunner'
)
print(f"Cluster Name: {response['cluster']['clusterName']}")
print(f"Cluster ARN: {response['cluster']['clusterArn']}")
print(f"Cluster Status: {response['cluster']['status']}")
cluster_arn = response['cluster']['clusterArn']
###Output
_____no_output_____
###Markdown
Build container image for Fargate taskFirst, install the Amazon SageMaker Studio Build CLI convenience package that allows you to build docker images from your Studio environment. Please ensure you have the pre-requisites in place as outlined in this [blog](https://aws.amazon.com/blogs/machine-learning/using-the-amazon-sagemaker-studio-image-build-cli-to-build-container-images-from-your-studio-notebooks/).
###Code
import sys
!{sys.executable} -m pip install sagemaker_studio_image_build
###Output
_____no_output_____
###Markdown
Next, write the code to your local environment that will be used to build the docker image. **task.py:** This code will be used by the task runner to start and monitor the Glue job then report status back to SageMaker Pipelines via *send_pipeline_execution_step_success* or *send_pipeline_execution_step_failure*
###Code
!mkdir container
%%writefile container/task.py
import boto3
import os
import sys
import traceback
import time
if 'inputLocation' in os.environ:
input_uri = os.environ['inputLocation']
else:
print("inputLocation not found in environment")
sys.exit(1)
if 'outputLocation' in os.environ:
output_uri = os.environ['outputLocation']
else:
print("outputLocation not found in environment")
sys.exit(1)
if 'token' in os.environ:
token = os.environ['token']
else:
print("token not found in environment")
sys.exit(1)
if 'glue_job_name' in os.environ:
glue_job_name = os.environ['glue_job_name']
else:
print("glue_job_name not found in environment")
sys.exit(1)
print(f"Processing from {input_uri} to {output_uri} using callback token {token}")
sagemaker = boto3.client('sagemaker')
glue = boto3.client('glue')
poll_interval = 60
try:
t1 = time.time()
response = glue.start_job_run(
JobName=glue_job_name,
Arguments={
'--output_uri': output_uri,
'--input_uri': input_uri
}
)
job_run_id = response['JobRunId']
print(f"Starting job {job_run_id}")
job_status = 'STARTING'
job_error = ''
while job_status in ['STARTING','RUNNING','STOPPING']:
time.sleep(poll_interval)
response = glue.get_job_run(
JobName=glue_job_name,
RunId=job_run_id,
PredecessorsIncluded=False
)
job_status = response['JobRun']['JobRunState']
if 'ErrorMessage' in response['JobRun']:
job_error = response['JobRun']['ErrorMessage']
print(f"Job is in state {job_status}")
t2 = time.time()
total_time = (t2 - t1) / 60.0
if job_status == 'SUCCEEDED':
print("Job succeeded")
sagemaker.send_pipeline_execution_step_success(
CallbackToken=token,
OutputParameters=[
{
'Name': 'minutes',
'Value': str(total_time)
},
{
'Name': 's3_data_out',
'Value': str(output_uri),
}
]
)
else:
print(f"Job failed: {job_error}")
sagemaker.send_pipeline_execution_step_failure(
CallbackToken=token,
FailureReason = job_error
)
except Exception as e:
trc = traceback.format_exc()
print(f"Error running ETL job: {str(e)}:\m {trc}")
sagemaker.send_pipeline_execution_step_failure(
CallbackToken=token,
FailureReason = str(e)
)
###Output
_____no_output_____
###Markdown
Next, write the code for your Dockerfile...
###Code
%%writefile container/Dockerfile
#FROM ubuntu:18.04
FROM public.ecr.aws/ubuntu/ubuntu:latest
RUN apt-get -y update && apt-get install -y --no-install-recommends \
python3-pip \
python3-setuptools \
curl \
unzip
RUN /usr/bin/pip3 install boto3
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
COPY task.py /opt
CMD /usr/bin/python3 /opt/task.py
###Output
_____no_output_____
###Markdown
Finally, use the studio image build CLI to build and push your image to ECR
###Code
%%sh
cd container
sm-docker build . --repository ecs-fargate-task:latest
###Output
_____no_output_____
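###Markdown
(Optional) Before moving on, you can confirm the image landed in ECR. A small sketch using the ECR API; it assumes the `ecs-fargate-task` repository name used in the build command above.
###Code
import boto3

# Optional check: confirm the image was pushed to the ECR repository created by sm-docker.
ecr = boto3.client("ecr")
images = ecr.describe_images(repositoryName="ecs-fargate-task")
for image in images["imageDetails"]:
    print(image.get("imageTags"), image["imagePushedAt"])
###Output
_____no_output_____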
###Markdown
After building the image, grab the ECR URI and store it in a local notebook variable in the next cell.
###Code
import sagemaker as sage
sess = sage.Session()
account = sess.boto_session.client("sts").get_caller_identity()["Account"]
region = boto3.session.Session().region_name
task_uri= "{}.dkr.ecr.{}.amazonaws.com/ecs-fargate-task".format(account, region)
print("URI:", task_uri)
###Output
_____no_output_____
###Markdown
Set up ECS Fargate taskNow we will create and register the task using the roles we created above.
###Code
region = boto3.Session().region_name
response = ecs.register_task_definition(
family='FargateTaskRunner',
taskRoleArn=task_role_arn,
executionRoleArn=ecs_role_arn,
networkMode='awsvpc',
containerDefinitions=[
{
'name': 'FargateTask',
'image': task_uri,
'cpu': 512,
'memory': 1024,
'essential': True,
'environment': [
{
'name': 'inputLocation',
'value': 'temp'
},
{
'name': 'outputLocation',
'value': 'temp'
}
],
'logConfiguration': {
'logDriver': 'awslogs',
'options': {
'awslogs-create-group': 'true',
'awslogs-group': 'glue_sg_pipeline',
'awslogs-region': region,
'awslogs-stream-prefix': 'task'
},
},
},
],
requiresCompatibilities=[
'FARGATE',
],
cpu='512',
memory='1024'
)
print(f"Task definition ARN: {response['taskDefinition']['taskDefinitionArn']}")
task_arn = response['taskDefinition']['taskDefinitionArn']
###Output
_____no_output_____
###Markdown
Copy data to our bucketNext, we'll copy the 2020 taxi data to the SageMaker session default bucket, one file per month.
###Code
s3 = boto3.client('s3')
taxi_bucket = 'nyc-tlc'
taxi_prefix = 'taxi'
for month in ['01','02','03','04','05','06','07','08','09','10','11','12']:
copy_source = {
'Bucket': taxi_bucket,
'Key': f"trip data/yellow_tripdata_2020-{month}.csv"
}
s3.copy(copy_source, default_bucket, f"{taxi_prefix}/yellow_tripdata_2020-{month}.csv")
default_bucket
###Output
_____no_output_____
###Markdown
Create SQS queue for pipelineIn this step, we'll create the SQS queue that will be used by the CallbackStep inside SageMaker Pipeline steps. SageMaker Pipelines will put a token to this queue that will serve as a trigger for your Lambda function which will initiate the Fargate task to process your data.
###Code
sqs_client = boto3.client('sqs')
queue_url = ''
queue_name = 'pipeline_callbacks_glue_prep'
try:
response = sqs_client.create_queue(QueueName=queue_name)
except Exception as e:
print(f"Failed to create queue: {e}")
###Output
_____no_output_____
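###Markdown
For reference, the message that SageMaker Pipelines puts on this queue is a JSON document containing the callback token plus the `inputs` you define on the CallbackStep later in this notebook. The sketch below is illustrative only (values are placeholders); it shows the two fields the Lambda handler in this notebook reads.
###Code
# Illustrative only: approximate shape of the SQS message body sent by a CallbackStep.
# The Lambda handler later in this notebook reads the "token" and "arguments" fields.
example_callback_message = {
    "token": "example-callback-token",  # opaque token passed back to SageMaker on success/failure
    "arguments": {
        "input_location": "s3://<default_bucket>/taxi/",
        "output_location": "s3://<default_bucket>/taxi_<id_out>/",
    },
}
###Output
_____no_output_____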
###Markdown
Construct the queue URL in the format we'll need later on.
###Code
queue_url = f"https://sqs.{region}.amazonaws.com/{account}/{queue_name}"
queue_url
###Output
_____no_output_____
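###Markdown
Alternatively, the queue URL can be fetched from SQS rather than assembled by hand; a small sketch using the `sqs_client` and `queue_name` from above:
###Code
# Alternative: ask SQS for the queue URL instead of formatting it manually.
queue_url = sqs_client.get_queue_url(QueueName=queue_name)["QueueUrl"]
queue_url
###Output
_____no_output_____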
###Markdown
VPC and security settingsFor this setup, we'll use the default VPC and all of its subnets for the Fargate task. However, we'll create a new security group for the tasks that allows all egress and only self-referencing ingress from within the group.
###Code
ec2 = boto3.client('ec2')
response = ec2.describe_vpcs(
Filters=[
{
'Name': 'isDefault',
'Values': [
'true'
]
}
]
)
default_vpc_id = response['Vpcs'][0]['VpcId']
response = ec2.describe_subnets(
Filters=[
{
'Name': 'vpc-id',
'Values': [
default_vpc_id
]
}
]
)
task_subnets = []
for r in response['Subnets']:
task_subnets.append(r['SubnetId'])
response = ec2.create_security_group(
Description='Security group for Fargate tasks',
GroupName='fg_task_sg',
VpcId=default_vpc_id
)
sg_id = response['GroupId']
response = ec2.authorize_security_group_ingress(
GroupId=sg_id,
IpPermissions=[
{
'FromPort': 0,
'IpProtocol': '-1',
'UserIdGroupPairs': [
{
'GroupId': sg_id,
'Description': 'local SG ingress'
},
],
'ToPort': 65535
},
]
)
###Output
_____no_output_____
###Markdown
Create ETL scriptThe ETL job will take two arguments, the location of the input data in S3 and the output path in S3.
###Code
%%writefile etl.py
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.sql.types import IntegerType
from pyspark.sql import functions as F
## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME', 'input_uri', 'output_uri'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
df = spark.read.format("csv").option("header", "true").load("{0}*.csv".format(args['input_uri']))
df = df.withColumn("Passengers", df["passenger_count"].cast(IntegerType()))
df = df.withColumn(
'pickup_time',
F.to_timestamp(
F.unix_timestamp('tpep_pickup_datetime', 'yyyy-MM-dd HH:mm:ss').cast('timestamp')))
dfW = df.groupBy(F.window("pickup_time", "30 minutes")).agg(F.sum("Passengers").alias("passenger"))
dfOut = dfW.drop('window')
dfOut.repartition(1).write.option("timestampFormat", "yyyy-MM-dd HH:mm:ss").csv(args['output_uri'])
job.commit()
s3.upload_file('etl.py', default_bucket, 'pipeline/etl.py')
glue_script_location = f"s3://{default_bucket}/pipeline/etl.py"
glue_script_location
###Output
_____no_output_____
###Markdown
Create ETL jobNext, we'll create the Glue job using the script and roles created in the previous steps.
###Code
glue = boto3.client('glue')
response = glue.create_job(
Name='GlueDataPrepForPipeline',
Description='Prepare data for SageMaker training',
Role=glue_role_arn,
ExecutionProperty={
'MaxConcurrentRuns': 1
},
Command={
'Name': 'glueetl',
'ScriptLocation': glue_script_location,
},
MaxRetries=0,
Timeout=60,
MaxCapacity=10.0,
GlueVersion='2.0'
)
glue_job_name = response['Name']
glue_job_name
###Output
_____no_output_____
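###Markdown
(Optional) Before wiring the job into the pipeline, you can smoke-test it directly against the copied data. A minimal sketch that assumes the `glue`, `glue_job_name`, `default_bucket`, and `taxi_prefix` variables from earlier cells; note this starts a real (billable) Glue job run.
###Code
# Optional smoke test: start one Glue job run directly and check its state.
# This mirrors what the Fargate task does, without the callback plumbing.
test_run = glue.start_job_run(
    JobName=glue_job_name,
    Arguments={
        "--input_uri": f"s3://{default_bucket}/{taxi_prefix}/",
        "--output_uri": f"s3://{default_bucket}/{taxi_prefix}_smoketest/",
    },
)
status = glue.get_job_run(JobName=glue_job_name, RunId=test_run["JobRunId"])
print(status["JobRun"]["JobRunState"])
###Output
_____no_output_____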
###Markdown
Create Lambda functionThe Lambda function will be triggered on new messages to the SQS queue created by the CallbackStep in SageMaker Pipelines. The Lambda function is responsible for initiating the run of your Fargate task. Now, write the code that will be used in the Lambda function.
###Code
%%writefile queue_handler.py
import json
import boto3
import os
import traceback
ecs = boto3.client('ecs')
sagemaker = boto3.client('sagemaker')
def handler(event, context):
print(f"Got event: {json.dumps(event)}")
cluster_arn = os.environ["cluster_arn"]
task_arn = os.environ["task_arn"]
task_subnets = os.environ["task_subnets"]
task_sgs = os.environ["task_sgs"]
glue_job_name = os.environ["glue_job_name"]
print(f"Cluster ARN: {cluster_arn}")
print(f"Task ARN: {task_arn}")
print(f"Task Subnets: {task_subnets}")
print(f"Task SG: {task_sgs}")
print(f"Glue job name: {glue_job_name}")
for record in event['Records']:
payload = json.loads(record["body"])
print(f"Processing record {payload}")
token = payload["token"]
print(f"Got token {token}")
try:
input_data_s3_uri = payload["arguments"]["input_location"]
output_data_s3_uri = payload["arguments"]["output_location"]
print(f"Got input_data_s3_uri {input_data_s3_uri}")
print(f"Got output_data_s3_uri {output_data_s3_uri}")
response = ecs.run_task(
cluster = cluster_arn,
count=1,
launchType='FARGATE',
taskDefinition=task_arn,
networkConfiguration={
'awsvpcConfiguration': {
'subnets': task_subnets.split(','),
'securityGroups': task_sgs.split(','),
'assignPublicIp': 'ENABLED'
}
},
overrides={
'containerOverrides': [
{
'name': 'FargateTask',
'environment': [
{
'name': 'inputLocation',
'value': input_data_s3_uri
},
{
'name': 'outputLocation',
'value': output_data_s3_uri
},
{
'name': 'token',
'value': token
},
{
'name': 'glue_job_name',
'value': glue_job_name
}
]
}
]
}
)
if 'failures' in response and len(response['failures']) > 0:
f = response['failures'][0]
print(f"Failed to launch task for token {token}: {f['reason']}")
sagemaker.send_pipeline_execution_step_failure(
CallbackToken=token,
FailureReason = f['reason']
)
else:
print(f"Launched task {response['tasks'][0]['taskArn']}")
except Exception as e:
trc = traceback.format_exc()
print(f"Error handling record: {str(e)}:\m {trc}")
sagemaker.send_step_failure(
CallbackToken=token,
FailureReason = e
)
###Output
_____no_output_____
###Markdown
Finally, bundle the code and upload it to S3 then create the Lambda function...
###Code
import zipfile
archive = zipfile.ZipFile('queue_handler.zip', 'w')
archive.write('queue_handler.py')
s3 = boto3.client('s3')
s3.upload_file('queue_handler.zip', default_bucket, 'pipeline/queue_handler.zip')
lambda_client = boto3.client('lambda')
lambda_client.create_function(
Code={
'S3Bucket': default_bucket,
'S3Key': 'pipeline/queue_handler.zip',
},
FunctionName='SMPipelineQueueHandler',
Description='Process Glue callback messages from SageMaker Pipelines',
Handler='queue_handler.handler',
Publish=True,
Role=lambda_role_arn,
Runtime='python3.7',
Timeout=20,
MemorySize=128,
PackageType='Zip',
Environment= {
'Variables': {
'cluster_arn': cluster_arn,
'task_arn': task_arn,
'task_subnets': ",".join(task_subnets),
'task_sgs': sg_id,
'glue_job_name': glue_job_name
}
}
)
###Output
_____no_output_____
###Markdown
Set up Lambda as SQS targetNext, we'll attach the lambda function created above to the SQS queue we previously created. This ensures that your Lambda will be triggered when new messages are put to your SQS queue.
###Code
lambda_client.create_event_source_mapping(
EventSourceArn=f'arn:aws:sqs:{region}:{account}:{queue_name}',
FunctionName='SMPipelineQueueHandler',
Enabled=True,
BatchSize=10
)
###Output
_____no_output_____
###Markdown
Build & Execute SageMaker PipelineNow that all of the components that support the tasks within your pipeline steps are created and configured, we're ready to bring it all together and set up the pipeline. First, make sure you have the latest version of the SageMaker Python SDK, which includes the *Callback* step.
###Code
!pip install sagemaker -U
###Output
_____no_output_____
###Markdown
Pipeline Initialization
###Code
import time
timestamp = int(time.time())
from sagemaker.workflow.parameters import (
ParameterInteger,
ParameterString,
)
input_data = ParameterString(
name="InputData",
default_value=f"s3://{default_bucket}/{taxi_prefix}/"
)
id_out = ParameterString(
name="IdOut",
default_value="taxiout"+ str(timestamp)
)
output_data = ParameterString(
name="OutputData",
default_value=f"s3://{default_bucket}/{taxi_prefix}_output/"
)
training_instance_count = ParameterInteger(
name="TrainingInstanceCount",
default_value=1
)
training_instance_type = ParameterString(
name="TrainingInstanceType",
default_value="ml.c5.xlarge"
)
###Output
_____no_output_____
###Markdown
Pipeline Steps 1 - Call Back Step First, we'll configure the callback step. The callback step will accept the following **inputs**: * S3 location of our raw taxi data * SQS queue The callback step will return the following **outputs**: * S3 location of processed data to be used for model training
###Code
from sagemaker.workflow.callback_step import CallbackStep,CallbackOutput,CallbackOutputTypeEnum
callback1_output=CallbackOutput(output_name="s3_data_out", output_type=CallbackOutputTypeEnum.String)
step_callback_data = CallbackStep(
name="GluePrepCallbackStep",
sqs_queue_url=queue_url,
inputs={
"input_location": f"s3://{default_bucket}/{taxi_prefix}/",
"output_location": f"s3://{default_bucket}/{taxi_prefix}_{id_out}/"
},
outputs=[
callback1_output
],
)
###Output
_____no_output_____
###Markdown
2 - Training Step Next, we'll configure the training step by first configuring the estimator for random cut forest. Then, we'll configure the training step. The training step will accept the following **inputs**: * S3 location of processed data to be used for model training * ECR containing the training image for rcf * Estimator configuration The training step will return the following **outputs**: * S3 location of the trained model artifact
###Code
containers = {
'us-west-2': '174872318107.dkr.ecr.us-west-2.amazonaws.com/randomcutforest:latest',
'us-east-1': '382416733822.dkr.ecr.us-east-1.amazonaws.com/randomcutforest:latest',
'us-east-2': '404615174143.dkr.ecr.us-east-2.amazonaws.com/randomcutforest:latest',
'eu-west-1': '438346466558.dkr.ecr.eu-west-1.amazonaws.com/randomcutforest:latest'}
region_name = boto3.Session().region_name
container = containers[region_name]
model_prefix = 'model'
session = sagemaker.Session()
rcf = sagemaker.estimator.Estimator(
container,
sagemaker.get_execution_role(),
output_path='s3://{}/{}/output'.format(default_bucket, model_prefix),
instance_count=training_instance_count,
instance_type=training_instance_type,
sagemaker_session=session)
rcf.set_hyperparameters(
num_samples_per_tree=200,
num_trees=50,
feature_dim=1)
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.steps import TrainingStep
step_train = TrainingStep(
name="TrainModel",
estimator=rcf,
inputs={
"train": TrainingInput(
# s3_data = Output of the previous call back step
s3_data=step_callback_data.properties.Outputs['s3_data_out'],
content_type="text/csv;label_size=0",
distribution='ShardedByS3Key'
),
},
)
###Output
_____no_output_____
###Markdown
3 - Create ModelNext, we'll package the trained model for deployment. The create model step will accept the following **inputs**: * S3 location of the trained model artifact * ECR containing the inference image for rcf The create model step will return the following **outputs**: * SageMaker packaged model
###Code
from sagemaker.model import Model
from sagemaker import get_execution_role
role = get_execution_role()
image_uri = sagemaker.image_uris.retrieve("randomcutforest", region)
model = Model(
image_uri=image_uri,
model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
sagemaker_session=sagemaker_session,
role=role,
)
from sagemaker.inputs import CreateModelInput
from sagemaker.workflow.steps import CreateModelStep
inputs = CreateModelInput(
instance_type="ml.m5.large",
)
create_model = CreateModelStep(
name="TaxiModel",
model=model,
inputs=inputs,
)
###Output
_____no_output_____
###Markdown
4 - Batch TransformNext, we'll deploy the model using batch transform then do a quick evaluation with our data to compute anomaly scores for each of our data points on input. The batch transform step will accept the following **inputs**: * SageMaker packaged model * S3 location of the input data * ECR containing the inference image for rcf The batch transform step will return the following **outputs**: * S3 location of the output data (anomaly scores)
###Code
base_uri = step_callback_data.properties.Outputs['s3_data_out']
output_prefix = 'batch-out'
from sagemaker.transformer import Transformer
transformer = Transformer(
model_name=create_model.properties.ModelName,
instance_type="ml.m5.xlarge",
assemble_with = "Line",
accept = 'text/csv',
instance_count=1,
output_path=f"s3://{default_bucket}/{output_prefix}/",
)
from sagemaker.inputs import TransformInput
from sagemaker.workflow.steps import TransformStep
batch_data=step_callback_data.properties.Outputs['s3_data_out']
step_transform = TransformStep(
name="TaxiTransform",
transformer=transformer,
inputs=TransformInput(
data=batch_data,
content_type="text/csv",
split_type="Line",
input_filter="$[0]",
join_source='Input',
output_filter='$[0,-1]',
)
)
###Output
_____no_output_____
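###Markdown
Once the pipeline execution below completes, the transform output lands under the `batch-out` prefix. A minimal sketch for pulling the scores back for inspection; it assumes `pandas` is available and reuses `default_bucket` and `output_prefix` from above. The exact column layout depends on the join/filter settings configured on the transform input.
###Code
# After the pipeline has finished: list the transform output objects and load one into pandas.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")
listing = s3.list_objects_v2(Bucket=default_bucket, Prefix=f"{output_prefix}/")
keys = [obj["Key"] for obj in listing.get("Contents", [])]
print(keys)

if keys:
    body = s3.get_object(Bucket=default_bucket, Key=keys[0])["Body"].read()
    scores = pd.read_csv(io.BytesIO(body), header=None)
    print(scores.head())
###Output
_____no_output_____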
###Markdown
Configure Pipeline Using Created Steps
###Code
import uuid
id_out = uuid.uuid4().hex
print('Unique ID:', id_out)
from sagemaker.workflow.pipeline import Pipeline
pipeline_name = f"GluePipeline-{id_out}"
pipeline = Pipeline(
name=pipeline_name,
parameters=[
input_data,
training_instance_type,
training_instance_count,
id_out,
],
steps=[step_callback_data, step_train,create_model,step_transform],
)
from sagemaker import get_execution_role
pipeline.upsert(role_arn = get_execution_role())
import json
definition = json.loads(pipeline.definition())
definition
###Output
_____no_output_____
###Markdown
Execute Pipeline
###Code
execution = pipeline.start()
execution.describe()
execution.list_steps()
###Output
_____no_output_____
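###Markdown
Pipeline parameters can also be overridden per execution instead of relying on the defaults defined earlier. A minimal sketch (the values shown are illustrative); parameter names must match those registered on the pipeline above.
###Code
# Start another execution with explicit parameter values instead of the defaults.
override_execution = pipeline.start(
    parameters={
        "TrainingInstanceCount": 1,
        "TrainingInstanceType": "ml.c5.xlarge",
    }
)
override_execution.describe()
###Output
_____no_output_____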
###Markdown
Glue ETL as part of a SageMaker pipelineThis notebook will show how to use the [Callback Step](https://docs.aws.amazon.com/sagemaker/latest/dg/build-and-manage-steps.htmlstep-type-callback) to extend your SageMaker Pipeline steps to include tasks performed by other AWS services or custom integrations. For this notebook, you'll learn how to include a Glue ETL job as part of a SageMaker ML pipeline. The overall flow will be:* Define Glue ETL job* Run Spark data preparation job in Glue* Run ML training job on SageMaker* Evaluate ML model performance The pipeline sends a message to an SQS queue. A Lambda function responds to SQS and invokes an ECS Fargate task. The task will handle running the Spark job and monitoring for progress. It'll then send the callback token back to the pipeline.![CustomStepPipeline](./images/pipelinescustom.png) Data setWe'll use the Yellow Taxi records from [NYC](https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page) in 2020. In this [blog](https://aws.amazon.com/blogs/machine-learning/use-the-built-in-amazon-sagemaker-random-cut-forest-algorithm-for-anomaly-detection/), we used a prepared version of the data that had passenger counts per half hour. In this notebook we'll take the raw NYC data and prepare the half-hour totals. One-time setupThis notebook needs permissions to:* Create Lambda functions* Create an ECS cluster* Upload images to ECR* Create IAM roles* Invoke SageMaker API for pipelines* Create security groups* Write data into S3* Create security groups* Describe VPC informationIn a production setting, we would deploy a lot of these resources using an infrastructure-as-code tool like CloudFormation or the CDK. But for simplicity in this demo we'll create everything in this notebook. ProcessingSetup the configurations & tasks that will be used to process data in the pipeline. Set up ECS Fargate clusterThe ECS Fargate cluster will be used to execute a Fargate task that will handle running the Spark data pre-processing in Glue and monitoring for progress. This task is invoked by a Lambda function that gets called whenever the CallbackStep puts a message to SQS.**Pipeline Step Tasks:** *CallbackStep -> SQS -> Lambda -> Fargate Task -> Glue Job*
###Code
import boto3
ecs = boto3.client('ecs')
response = ecs.create_cluster(
clusterName='FargateTaskRunner'
)
print(f"Cluster Name: {response['cluster']['clusterName']}")
print(f"Cluster ARN: {response['cluster']['clusterArn']}")
print(f"Cluster Status: {response['cluster']['status']}")
cluster_arn = response['cluster']['clusterArn']
###Output
_____no_output_____
###Markdown
Build container image for Fargate taskFirst, install the Amazon SageMaker Studio Build CLI convenience package that allows you to build docker images from your Studio environment. Please ensure you have the pre-requisites in place as outlined in this [blog](https://aws.amazon.com/blogs/machine-learning/using-the-amazon-sagemaker-studio-image-build-cli-to-build-container-images-from-your-studio-notebooks/).
###Code
import sys
!{sys.executable} -m pip install sagemaker_studio_image_build
###Output
_____no_output_____
###Markdown
Next, write the code to your local environment that will be used to build the docker image. **task.py:** This code will be used by the task runner to start and monitor the Glue job then report status back to SageMaker Pipelines via *send_pipeline_execution_step_success* or *send_pipeline_execution_step_failure*
###Code
!mkdir container
%%writefile container/task.py
import boto3
import os
import sys
import traceback
import time
if 'inputLocation' in os.environ:
input_uri = os.environ['inputLocation']
else:
print("inputLocation not found in environment")
sys.exit(1)
if 'outputLocation' in os.environ:
output_uri = os.environ['outputLocation']
else:
print("outputLocation not found in environment")
sys.exit(1)
if 'token' in os.environ:
token = os.environ['token']
else:
print("token not found in environment")
sys.exit(1)
if 'glue_job_name' in os.environ:
glue_job_name = os.environ['glue_job_name']
else:
print("glue_job_name not found in environment")
sys.exit(1)
print(f"Processing from {input_uri} to {output_uri} using callback token {token}")
sagemaker = boto3.client('sagemaker')
glue = boto3.client('glue')
poll_interval = 60
try:
t1 = time.time()
response = glue.start_job_run(
JobName=glue_job_name,
Arguments={
'--output_uri': output_uri,
'--input_uri': input_uri
}
)
job_run_id = response['JobRunId']
print(f"Starting job {job_run_id}")
job_status = 'STARTING'
job_error = ''
while job_status in ['STARTING','RUNNING','STOPPING']:
time.sleep(poll_interval)
response = glue.get_job_run(
JobName=glue_job_name,
RunId=job_run_id,
PredecessorsIncluded=False
)
job_status = response['JobRun']['JobRunState']
if 'ErrorMessage' in response['JobRun']:
job_error = response['JobRun']['ErrorMessage']
print(f"Job is in state {job_status}")
t2 = time.time()
total_time = (t2 - t1) / 60.0
if job_status == 'SUCCEEDED':
print("Job succeeded")
sagemaker.send_pipeline_execution_step_success(
CallbackToken=token,
OutputParameters=[
{
'Name': 'minutes',
'Value': str(total_time)
},
{
'Name': 's3_data_out',
'Value': str(output_uri),
}
]
)
else:
print(f"Job failed: {job_error}")
sagemaker.send_pipeline_execution_step_failure(
CallbackToken=token,
FailureReason = job_error
)
except Exception as e:
trc = traceback.format_exc()
print(f"Error running ETL job: {str(e)}:\m {trc}")
sagemaker.send_pipeline_execution_step_failure(
CallbackToken=token,
FailureReason = str(e)
)
###Output
_____no_output_____
###Markdown
Next, write the code for your Dockerfile...
###Code
%%writefile container/Dockerfile
#FROM ubuntu:18.04
FROM public.ecr.aws/ubuntu/ubuntu:latest
RUN apt-get -y update && apt-get install -y --no-install-recommends \
python3-pip \
python3-setuptools \
curl \
unzip
RUN /usr/bin/pip3 install boto3
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
COPY task.py /opt
CMD /usr/bin/python3 /opt/task.py
###Output
_____no_output_____
###Markdown
Finally, use the studio image build CLI to build and push your image to ECR
###Code
%%sh
cd container
sm-docker build . --repository ecs-fargate-task:latest
###Output
_____no_output_____
###Markdown
After building the image, grab the ECR URI and store it in a local notebook variable in the next cell.
###Code
import sagemaker as sage
sess = sage.Session()
account = sess.boto_session.client("sts").get_caller_identity()["Account"]
region = boto3.session.Session().region_name
task_uri= "{}.dkr.ecr.{}.amazonaws.com/ecs-fargate-task".format(account, region)
print("URI:", task_uri)
###Output
_____no_output_____
###Markdown
Set up ECS Fargate taskFirst we need to create IAM policies for the task. Besides the usual policies that allow pulling images and creating logs, the task needs permission to start and monitor a Glue job, and send the callback token to SageMaker. Because the specific SageMaker action isn't visible in IAM yet, for now we give the task full SageMaker permissions.
###Code
iam = boto3.client('iam')
response = iam.create_role(
RoleName='fg_task_pipeline_role',
AssumeRolePolicyDocument='''{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}''',
Description='Role for ECS task execution',
)
ecs_role_arn = response['Role']['Arn']
ecs_role_name = response['Role']['RoleName']
response = iam.attach_role_policy(
RoleName=ecs_role_name,
PolicyArn='arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy'
)
response = iam.create_role(
RoleName='fg_task_runner_pipeline_role',
AssumeRolePolicyDocument='''{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}''',
Description='Role for ECS tasks',
)
task_role_arn = response['Role']['Arn']
task_role_name = response['Role']['RoleName']
response = iam.attach_role_policy(
RoleName=task_role_name,
PolicyArn='arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy'
)
response = iam.put_role_policy(
RoleName=task_role_name,
PolicyName='create_log_group',
PolicyDocument='{"Version":"2012-10-17","Statement":{"Effect":"Allow","Action":"logs:CreateLogGroup","Resource":"*"}}'
)
response = iam.put_role_policy(
RoleName=ecs_role_name,
PolicyName='create_log_group',
PolicyDocument='{"Version":"2012-10-17","Statement":{"Effect":"Allow","Action":"logs:CreateLogGroup","Resource":"*"}}'
)
response = iam.put_role_policy(
RoleName=task_role_name,
PolicyName='glue_job',
PolicyDocument='{"Version":"2012-10-17","Statement":{"Effect":"Allow","Action":"glue:StartJobRun","Resource":"*"}}'
)
response = iam.put_role_policy(
RoleName=task_role_name,
PolicyName='glue_job_poll',
PolicyDocument='{"Version":"2012-10-17","Statement":{"Effect":"Allow","Action":"glue:GetJobRun","Resource":"*"}}'
)
response = iam.put_role_policy(
RoleName=task_role_name,
PolicyName='send_sm_fail',
PolicyDocument='{"Version":"2012-10-17","Statement":{"Effect":"Allow","Action":"sagemaker:*","Resource":"*"}}'
)
response = iam.put_role_policy(
RoleName=task_role_name,
PolicyName='send_sm_success',
PolicyDocument='{"Version":"2012-10-17","Statement":{"Effect":"Allow","Action":"sagemaker:*","Resource":"*"}}'
)
###Output
_____no_output_____
###Markdown
Next, we will create and register the task using the roles we created above.
###Code
region = boto3.Session().region_name
response = ecs.register_task_definition(
family='FargateTaskRunner',
taskRoleArn=task_role_arn,
executionRoleArn=ecs_role_arn,
networkMode='awsvpc',
containerDefinitions=[
{
'name': 'FargateTask',
'image': task_uri,
'cpu': 512,
'memory': 1024,
'essential': True,
'environment': [
{
'name': 'inputLocation',
'value': 'temp'
},
{
'name': 'outputLocation',
'value': 'temp'
}
],
'logConfiguration': {
'logDriver': 'awslogs',
'options': {
'awslogs-create-group': 'true',
'awslogs-group': 'emr_sg_pipeline',
'awslogs-region': region,
'awslogs-stream-prefix': 'task'
},
},
},
],
requiresCompatibilities=[
'FARGATE',
],
cpu='512',
memory='1024'
)
print(f"Task definition ARN: {response['taskDefinition']['taskDefinitionArn']}")
task_arn = response['taskDefinition']['taskDefinitionArn']
###Output
_____no_output_____
###Markdown
Copy data to our bucketNext, we'll copy the 2020 taxi data to the SageMaker session default bucket, one file per month.
###Code
s3 = boto3.client('s3')
import sagemaker
sagemaker_session = sagemaker.session.Session()
default_bucket = sagemaker_session.default_bucket()
taxi_bucket = 'nyc-tlc'
taxi_prefix = 'taxi'
for month in ['01','02','03','04','05','06','07','08','09','10','11','12']:
copy_source = {
'Bucket': taxi_bucket,
'Key': f"trip data/yellow_tripdata_2020-{month}.csv"
}
s3.copy(copy_source, default_bucket, f"{taxi_prefix}/yellow_tripdata_2020-{month}.csv")
default_bucket
###Output
_____no_output_____
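###Markdown
(Optional) You can verify the monthly files landed in your bucket before moving on. A small sketch that reuses the `s3` client, `default_bucket`, and `taxi_prefix` from the cell above:
###Code
# Optional check: list the monthly files copied into the default bucket.
listing = s3.list_objects_v2(Bucket=default_bucket, Prefix=f"{taxi_prefix}/")
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["Size"])
###Output
_____no_output_____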
###Markdown
Create SQS queue for pipelineIn this step, we'll create the SQS queue that will be used by the CallbackStep inside SageMaker Pipeline steps. SageMaker Pipelines will put a token to this queue that will serve as a trigger for your Lambda function which will initiate the Fargate task to process your data.
###Code
sqs_client = boto3.client('sqs')
queue_url = ''
queue_name = 'pipeline_callbacks_glue_prep'
try:
response = sqs_client.create_queue(QueueName=queue_name)
except Exception as e:
print(f"Failed to create queue: {e}")
###Output
_____no_output_____
###Markdown
Construct the queue URL in the format we'll need later on.
###Code
queue_url = f"https://sqs.{region}.amazonaws.com/{account}/{queue_name}"
queue_url
###Output
_____no_output_____
###Markdown
VPC and security settingsFor this setup, we'll use the default VPC and all of its subnets for the Fargate task. However, we'll create a new security group for the tasks that allows all egress and only self-referencing ingress from within the group.
###Code
ec2 = boto3.client('ec2')
response = ec2.describe_vpcs(
Filters=[
{
'Name': 'isDefault',
'Values': [
'true'
]
}
]
)
default_vpc_id = response['Vpcs'][0]['VpcId']
response = ec2.describe_subnets(
Filters=[
{
'Name': 'vpc-id',
'Values': [
default_vpc_id
]
}
]
)
task_subnets = []
for r in response['Subnets']:
task_subnets.append(r['SubnetId'])
response = ec2.create_security_group(
Description='Security group for Fargate tasks',
GroupName='fg_task_sg',
VpcId=default_vpc_id
)
sg_id = response['GroupId']
response = ec2.authorize_security_group_ingress(
GroupId=sg_id,
IpPermissions=[
{
'FromPort': 0,
'IpProtocol': '-1',
'UserIdGroupPairs': [
{
'GroupId': sg_id,
'Description': 'local SG ingress'
},
],
'ToPort': 65535
},
]
)
###Output
_____no_output_____
###Markdown
Create ETL scriptThe ETL job will take two arguments, the location of the input data in S3 and the output path in S3.
###Code
%%writefile etl.py
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.sql.types import IntegerType
from pyspark.sql import functions as F
## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME', 'input_uri', 'output_uri'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
df = spark.read.format("csv").option("header", "true").load("{0}*.csv".format(args['input_uri']))
df = df.withColumn("Passengers", df["passenger_count"].cast(IntegerType()))
df = df.withColumn(
'pickup_time',
F.to_timestamp(
F.unix_timestamp('tpep_pickup_datetime', 'yyyy-MM-dd HH:mm:ss').cast('timestamp')))
dfW = df.groupBy(F.window("pickup_time", "30 minutes")).agg(F.sum("Passengers").alias("passenger"))
dfOut = dfW.drop('window')
dfOut.repartition(1).write.option("timestampFormat", "yyyy-MM-dd HH:mm:ss").csv(args['output_uri'])
job.commit()
s3.upload_file('etl.py', default_bucket, 'pipeline/etl.py')
glue_script_location = f"s3://{default_bucket}/pipeline/etl.py"
glue_script_location
###Output
_____no_output_____
###Markdown
Create ETL jobFirst, the Glue job needs permission to read and write from our S3 bucket.
###Code
response = iam.create_role(
RoleName='glue_pipeline_role',
AssumeRolePolicyDocument='''{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "glue.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}''',
Description='Role for Glue ETL job',
)
glue_role_arn = response['Role']['Arn']
glue_role_name = response['Role']['RoleName']
response = iam.attach_role_policy(
RoleName=glue_role_name,
PolicyArn='arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole'
)
response = iam.put_role_policy(
RoleName=glue_role_name,
PolicyName='glue_s3',
PolicyDocument='{"Version":"2012-10-17","Statement":{"Effect":"Allow","Action":"s3:*","Resource":"arn:aws:s3:::' + default_bucket + '"}}'
)
response = iam.put_role_policy(
RoleName=glue_role_name,
PolicyName='glue_s3_objects',
PolicyDocument='{"Version":"2012-10-17","Statement":{"Effect":"Allow","Action":"s3:*","Resource":"arn:aws:s3:::' + default_bucket + '/*"}}'
)
###Output
_____no_output_____
###Markdown
Next, we'll create the Glue job using the script and roles created in the previous steps.
###Code
glue = boto3.client('glue')
response = glue.create_job(
Name='GlueDataPrepForPipeline',
Description='Prepare data for SageMaker training',
Role=glue_role_arn,
ExecutionProperty={
'MaxConcurrentRuns': 1
},
Command={
'Name': 'glueetl',
'ScriptLocation': glue_script_location,
},
MaxRetries=0,
Timeout=60,
MaxCapacity=10.0,
GlueVersion='2.0'
)
glue_job_name = response['Name']
glue_job_name
###Output
_____no_output_____
###Markdown
Create Lambda functionThe Lambda function will be triggered on new messages to the SQS queue created by the CallbackStep in SageMaker Pipelines. The Lambda function is responsible for initiating the run of your Fargate task. The Lambda function needs permission to run an ECS task, send the failure callback if something goes wrong, and poll SQS.
###Code
response = iam.create_role(
RoleName='lambda_sm_pipeline_role',
AssumeRolePolicyDocument='''{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}''',
Description='Role for Lambda to call ECS Fargate task',
)
lambda_role_arn = response['Role']['Arn']
lambda_role_name = response['Role']['RoleName']
response = iam.attach_role_policy(
RoleName=lambda_role_name,
PolicyArn='arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'
)
response = iam.put_role_policy(
RoleName=lambda_role_name,
PolicyName='run_ecs_task',
PolicyDocument='{"Version":"2012-10-17","Statement":{"Effect":"Allow","Action":"ecs:RunTask","Resource":"*"}}'
)
response = iam.put_role_policy(
RoleName=lambda_role_name,
PolicyName='send_sm_fail',
PolicyDocument='{"Version":"2012-10-17","Statement":{"Effect":"Allow","Action":"sagemaker:*","Resource":"*"}}'
)
response = iam.put_role_policy(
RoleName=lambda_role_name,
PolicyName='poll_sqs',
PolicyDocument='{"Version":"2012-10-17","Statement":{"Effect":"Allow","Action":"sqs:*","Resource":"*"}}'
)
response = iam.put_role_policy(
RoleName=lambda_role_name,
PolicyName='pass_ecs_role',
PolicyDocument='{"Version":"2012-10-17","Statement":{"Effect":"Allow","Action":"iam:PassRole","Resource":"' + ecs_role_arn + '"}}'
)
response = iam.put_role_policy(
RoleName=lambda_role_name,
PolicyName='pass_task_role',
PolicyDocument='{"Version":"2012-10-17","Statement":{"Effect":"Allow","Action":"iam:PassRole","Resource":"' + task_role_arn + '"}}'
)
###Output
_____no_output_____
###Markdown
Next, write the code that will be used in the Lambda function.
###Code
%%writefile queue_handler.py
import json
import boto3
import os
import traceback
ecs = boto3.client('ecs')
sagemaker = boto3.client('sagemaker')
def handler(event, context):
print(f"Got event: {json.dumps(event)}")
cluster_arn = os.environ["cluster_arn"]
task_arn = os.environ["task_arn"]
task_subnets = os.environ["task_subnets"]
task_sgs = os.environ["task_sgs"]
glue_job_name = os.environ["glue_job_name"]
print(f"Cluster ARN: {cluster_arn}")
print(f"Task ARN: {task_arn}")
print(f"Task Subnets: {task_subnets}")
print(f"Task SG: {task_sgs}")
print(f"Glue job name: {glue_job_name}")
for record in event['Records']:
payload = json.loads(record["body"])
print(f"Processing record {payload}")
token = payload["token"]
print(f"Got token {token}")
try:
input_data_s3_uri = payload["arguments"]["input_location"]
output_data_s3_uri = payload["arguments"]["output_location"]
print(f"Got input_data_s3_uri {input_data_s3_uri}")
print(f"Got output_data_s3_uri {output_data_s3_uri}")
response = ecs.run_task(
cluster = cluster_arn,
count=1,
launchType='FARGATE',
taskDefinition=task_arn,
networkConfiguration={
'awsvpcConfiguration': {
'subnets': task_subnets.split(','),
'securityGroups': task_sgs.split(','),
'assignPublicIp': 'ENABLED'
}
},
overrides={
'containerOverrides': [
{
'name': 'FargateTask',
'environment': [
{
'name': 'inputLocation',
'value': input_data_s3_uri
},
{
'name': 'outputLocation',
'value': output_data_s3_uri
},
{
'name': 'token',
'value': token
},
{
'name': 'glue_job_name',
'value': glue_job_name
}
]
}
]
}
)
if 'failures' in response and len(response['failures']) > 0:
f = response['failures'][0]
print(f"Failed to launch task for token {token}: {f['reason']}")
sagemaker.send_pipeline_execution_step_failure(
CallbackToken=token,
FailureReason = f['reason']
)
else:
print(f"Launched task {response['tasks'][0]['taskArn']}")
except Exception as e:
trc = traceback.format_exc()
print(f"Error handling record: {str(e)}:\m {trc}")
sagemaker.send_step_failure(
CallbackToken=token,
FailureReason = e
)
###Output
_____no_output_____
###Markdown
Finally, bundle the code and upload it to S3 then create the Lambda function...
###Code
import zipfile
archive = zipfile.ZipFile('queue_handler.zip', 'w')
archive.write('queue_handler.py')
s3 = boto3.client('s3')
s3.upload_file('queue_handler.zip', default_bucket, 'pipeline/queue_handler.zip')
lambda_client = boto3.client('lambda')
lambda_client.create_function(
Code={
'S3Bucket': default_bucket,
'S3Key': 'pipeline/queue_handler.zip',
},
FunctionName='SMPipelineQueueHandler',
Description='Process Glue callback messages from SageMaker Pipelines',
Handler='queue_handler.handler',
Publish=True,
Role=lambda_role_arn,
Runtime='python3.7',
Timeout=20,
MemorySize=128,
PackageType='Zip',
Environment= {
'Variables': {
'cluster_arn': cluster_arn,
'task_arn': task_arn,
'task_subnets': ",".join(task_subnets),
'task_sgs': sg_id,
'glue_job_name': glue_job_name
}
}
)
###Output
_____no_output_____
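###Markdown
(Optional) A quick way to confirm the function and its environment variables were created as expected, assuming the `lambda_client` from the cell above:
###Code
# Optional check: confirm the function exists and its environment variables were set.
config = lambda_client.get_function_configuration(FunctionName="SMPipelineQueueHandler")
print(config["Runtime"], config["Timeout"])
print(sorted(config["Environment"]["Variables"].keys()))
###Output
_____no_output_____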
###Markdown
Set up Lambda as SQS targetNext, we'll attach the lambda function created above to the SQS queue we previously created. This ensures that your Lambda will be triggered when new messages are put to your SQS queue.
###Code
lambda_client.create_event_source_mapping(
EventSourceArn=f'arn:aws:sqs:{region}:{account}:{queue_name}',
FunctionName='SMPipelineQueueHandler',
Enabled=True,
BatchSize=10
)
###Output
_____no_output_____
###Markdown
Build & Execute SageMaker PipelineNow that all of the components that support the tasks within your pipeline steps are created and configured, we're ready to bring it all together and set up the pipeline. Pipeline Initialization
###Code
import time
timestamp = int(time.time())
from sagemaker.workflow.parameters import (
ParameterInteger,
ParameterString,
)
input_data = ParameterString(
name="InputData",
default_value=f"s3://{default_bucket}/{taxi_prefix}/"
)
id_out = ParameterString(
name="IdOut",
default_value="taxiout"+ str(timestamp)
)
output_data = ParameterString(
name="OutputData",
default_value=f"s3://{default_bucket}/{taxi_prefix}_output/"
)
training_instance_count = ParameterInteger(
name="TrainingInstanceCount",
default_value=1
)
training_instance_type = ParameterString(
name="TrainingInstanceType",
default_value="ml.c5.xlarge"
)
###Output
_____no_output_____
###Markdown
Pipeline Steps 1 - Call Back Step First, we'll configure the callback step. The callback step will accept the following **inputs**: * S3 location of our raw taxi data * SQS queue The callback step will return the following **outputs**: * S3 location of processed data to be used for model training
###Code
from sagemaker.workflow.callback_step import CallbackStep,CallbackOutput,CallbackOutputTypeEnum
callback1_output=CallbackOutput(output_name="s3_data_out", output_type=CallbackOutputTypeEnum.String)
step_callback_data = CallbackStep(
name="GluePrepCallbackStep",
sqs_queue_url=queue_url,
inputs={
"input_location": f"s3://{default_bucket}/{taxi_prefix}/",
"output_location": f"s3://{default_bucket}/{taxi_prefix}_{id_out}/"
},
outputs=[
callback1_output
],
)
###Output
_____no_output_____
###Markdown
2 - Training Step Next, we'll configure the training step by first configuring the estimator for random cut forest. Then, we'll configure the training step. The training step will accept the following **inputs**: * S3 location of processed data to be used for model training * ECR containing the training image for rcf * Estimator configuration The training step will return the following **outputs**: * S3 location of the trained model artifact
###Code
containers = {
'us-west-2': '174872318107.dkr.ecr.us-west-2.amazonaws.com/randomcutforest:latest',
'us-east-1': '382416733822.dkr.ecr.us-east-1.amazonaws.com/randomcutforest:latest',
'us-east-2': '404615174143.dkr.ecr.us-east-2.amazonaws.com/randomcutforest:latest',
'eu-west-1': '438346466558.dkr.ecr.eu-west-1.amazonaws.com/randomcutforest:latest'}
region_name = boto3.Session().region_name
container = containers[region_name]
model_prefix = 'model'
session = sagemaker.Session()
rcf = sagemaker.estimator.Estimator(
container,
sagemaker.get_execution_role(),
output_path='s3://{}/{}/output'.format(default_bucket, model_prefix),
instance_count=training_instance_count,
instance_type=training_instance_type,
sagemaker_session=session)
rcf.set_hyperparameters(
num_samples_per_tree=200,
num_trees=50,
feature_dim=1)
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.steps import TrainingStep
step_train = TrainingStep(
name="TrainModel",
estimator=rcf,
inputs={
"train": TrainingInput(
# s3_data = Output of the previous call back step
s3_data=step_callback_data.properties.Outputs['s3_data_out'],
content_type="text/csv;label_size=0",
distribution='ShardedByS3Key'
),
},
)
###Output
_____no_output_____
###Markdown
3 - Create ModelNext, we'll package the trained model for deployment. The create model step will accept the following **inputs**: * S3 location of the trained model artifact * ECR containing the inference image for rcf The create model step will return the following **outputs**: * SageMaker packaged model
###Code
from sagemaker.model import Model
from sagemaker import get_execution_role
role = get_execution_role()
image_uri = sagemaker.image_uris.retrieve("randomcutforest", region)
model = Model(
image_uri=image_uri,
model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
sagemaker_session=sagemaker_session,
role=role,
)
from sagemaker.inputs import CreateModelInput
from sagemaker.workflow.steps import CreateModelStep
inputs = CreateModelInput(
instance_type="ml.m5.large",
)
create_model = CreateModelStep(
name="TaxiModel",
model=model,
inputs=inputs,
)
###Output
_____no_output_____
###Markdown
4 - Batch TransformNext, we'll deploy the model using batch transform then do a quick evaluation with our data to compute anomaly scores for each of our data points on input. The batch transform step will accept the following **inputs**: * SageMaker packaged model * S3 location of the input data * ECR containing the inference image for rcf The batch transform step will return the following **outputs**: * S3 location of the output data (anomaly scores)
###Code
base_uri = step_callback_data.properties.Outputs['s3_data_out']
output_prefix = 'batch-out'
from sagemaker.transformer import Transformer
transformer = Transformer(
model_name=create_model.properties.ModelName,
instance_type="ml.m5.xlarge",
assemble_with = "Line",
accept = 'text/csv',
instance_count=1,
output_path=f"s3://{default_bucket}/{output_prefix}/",
)
from sagemaker.inputs import TransformInput
from sagemaker.workflow.steps import TransformStep
batch_data=step_callback_data.properties.Outputs['s3_data_out']
step_transform = TransformStep(
name="TaxiTransform",
transformer=transformer,
inputs=TransformInput(
data=batch_data,
content_type="text/csv",
split_type="Line",
input_filter="$[0]",
join_source='Input',
output_filter='$[0,-1]',
)
)
###Output
_____no_output_____
###Markdown
Configure Pipeline Using Created Steps
###Code
import uuid
id_out = uuid.uuid4().hex
print('Unique ID:', id_out)
from sagemaker.workflow.pipeline import Pipeline
pipeline_name = f"GluePipeline-{id_out}"
pipeline = Pipeline(
name=pipeline_name,
parameters=[
input_data,
training_instance_type,
training_instance_count,
id_out,
],
steps=[step_callback_data, step_train,create_model,step_transform],
)
from sagemaker import get_execution_role
pipeline.upsert(role_arn = get_execution_role())
import json
definition = json.loads(pipeline.definition())
definition
###Output
_____no_output_____
###Markdown
Execute Pipeline
###Code
execution = pipeline.start(
# parameters=dict(
# IdOut=id_out
# )
)
execution.describe()
execution.list_steps()
###Output
_____no_output_____
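###Markdown
`describe` and `list_steps` can also be polled while the execution is still in flight. A minimal non-blocking status check, assuming the `execution` object from the cell above:
###Code
# Non-blocking status check: print the overall execution status and each step's state.
status = execution.describe()["PipelineExecutionStatus"]
print(f"Pipeline execution status: {status}")
for step in execution.list_steps():
    print(f"  {step['StepName']}: {step['StepStatus']}")
###Output
_____no_output_____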
###Markdown
Glue ETL as part of a SageMaker pipelineThis notebook will show how to use the [Callback Step](https://docs.aws.amazon.com/sagemaker/latest/dg/build-and-manage-steps.htmlstep-type-callback) to extend your SageMaker Pipeline steps to include tasks performed by other AWS services or custom integrations. For this notebook, you'll learn how to include a Glue ETL job as part of a SageMaker ML pipeline. The overall flow will be:* Define Glue ETL job* Run Spark data preparation job in Glue* Run ML training job on SageMaker* Evaluate ML model performance The pipeline sends a message to an SQS queue. A Lambda function responds to SQS and invokes an ECS Fargate task. The task will handle running the Spark job and monitoring for progress. It'll then send the callback token back to the pipeline.![CustomStepPipeline](./images/pipelinescustom.png) Data setWe'll use the Yellow Taxi records from [NYC](https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page) in 2020. In this [blog](https://aws.amazon.com/blogs/machine-learning/use-the-built-in-amazon-sagemaker-random-cut-forest-algorithm-for-anomaly-detection/), we used a prepared version of the data that had passenger counts per half hour. In this notebook we'll take the raw NYC data and prepare the half-hour totals. One-time setupThis notebook needs permissions to:* Create Lambda functions* Create an ECS cluster* Upload images to ECR* Create IAM roles* Invoke SageMaker API for pipelines* Create security groups* Write data into S3* Create security groups* Describe VPC informationIn a production setting, we would deploy a lot of these resources using an infrastructure-as-code tool like CloudFormation or the CDK. But for simplicity in this demo we'll create everything in this notebook. Setup prerequisite IAM roles First we need to create the following IAM roles:* A role for the ECS Fargate task and task runner. Besides the usual policies that allow pulling images and creating logs, the task needs permission to start and monitor a Glue job, and send the callback token to SageMaker. Because the specific SageMaker action isn't visible in IAM yet, for now we give the task full SageMaker permissions.* A role for Glue with permissions to read and write from our S3 bucket.* A role for Lambda with permissions to run an ECS task, send the failure callback if something goes wrong, and poll SQS.For your convenience, we have prepared the setup_iam_roles.py script to help create the IAM roles and respective policies. In most cases, this script will be run by administrator teams, on behalf of data scientists.
###Code
import sagemaker
from setup_iam_roles import create_glue_pipeline_role
from setup_iam_roles import create_lambda_sm_pipeline_role
from setup_iam_roles import create_ecs_task_role, create_task_runner_role
sagemaker_session = sagemaker.session.Session()
default_bucket = sagemaker_session.default_bucket()
ecs_role_arn = create_ecs_task_role(role_name="fg_task_pipeline_role")
task_role_arn = create_task_runner_role(role_name="fg_task_runner_pipeline_role")
glue_role_arn = create_glue_pipeline_role(role_name="glue_pipeline_role", bucket=default_bucket)
lambda_role_arn = create_lambda_sm_pipeline_role(
role_name="lambda_sm_pipeline_role", ecs_role_arn=ecs_role_arn, task_role_arn=task_role_arn
)
###Output
_____no_output_____
###Markdown
ProcessingSetup the configurations & tasks that will be used to process data in the pipeline. Set up ECS Fargate clusterThe ECS Fargate cluster will be used to execute a Fargate task that will handle running the Spark data pre-processing in Glue and monitoring for progress. This task is invoked by a Lambda function that gets called whenever the CallbackStep puts a message to SQS.**Pipeline Step Tasks:** *CallbackStep -> SQS -> Lambda -> Fargate Task -> Glue Job*
###Code
import boto3
ecs = boto3.client("ecs")
response = ecs.create_cluster(clusterName="FargateTaskRunner")
print(f"Cluster Name: {response['cluster']['clusterName']}")
print(f"Cluster ARN: {response['cluster']['clusterArn']}")
print(f"Cluster Status: {response['cluster']['status']}")
cluster_arn = response["cluster"]["clusterArn"]
###Output
_____no_output_____
###Markdown
Build container image for Fargate taskFirst, install the Amazon SageMaker Studio Build CLI convenience package that allows you to build docker images from your Studio environment. Please ensure you have the pre-requisites in place as outlined in this [blog](https://aws.amazon.com/blogs/machine-learning/using-the-amazon-sagemaker-studio-image-build-cli-to-build-container-images-from-your-studio-notebooks/).
###Code
import sys
!{sys.executable} -m pip install sagemaker_studio_image_build
###Output
_____no_output_____
###Markdown
Next, write the code to your local environment that will be used to build the docker image. **task.py:** This code will be used by the task runner to start and monitor the Glue job then report status back to SageMaker Pipelines via *send_pipeline_execution_step_success* or *send_pipeline_execution_step_failure*
###Code
!mkdir container
%%writefile container/task.py
import boto3
import os
import sys
import traceback
import time
if "inputLocation" in os.environ:
input_uri = os.environ["inputLocation"]
else:
print("inputLocation not found in environment")
sys.exit(1)
if "outputLocation" in os.environ:
output_uri = os.environ["outputLocation"]
else:
print("outputLocation not found in environment")
sys.exit(1)
if "token" in os.environ:
token = os.environ["token"]
else:
print("token not found in environment")
sys.exit(1)
if "glue_job_name" in os.environ:
glue_job_name = os.environ["glue_job_name"]
else:
print("glue_job_name not found in environment")
sys.exit(1)
print(f"Processing from {input_uri} to {output_uri} using callback token {token}")
sagemaker = boto3.client("sagemaker")
glue = boto3.client("glue")
poll_interval = 60
try:
t1 = time.time()
response = glue.start_job_run(
JobName=glue_job_name, Arguments={"--output_uri": output_uri, "--input_uri": input_uri}
)
job_run_id = response["JobRunId"]
print(f"Starting job {job_run_id}")
job_status = "STARTING"
job_error = ""
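    # Poll the Glue job run until it leaves a transient state (STARTING/RUNNING/STOPPING)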
while job_status in ["STARTING", "RUNNING", "STOPPING"]:
time.sleep(poll_interval)
response = glue.get_job_run(
JobName=glue_job_name, RunId=job_run_id, PredecessorsIncluded=False
)
job_status = response["JobRun"]["JobRunState"]
if "ErrorMessage" in response["JobRun"]:
job_error = response["JobRun"]["ErrorMessage"]
print(f"Job is in state {job_status}")
t2 = time.time()
total_time = (t2 - t1) / 60.0
if job_status == "SUCCEEDED":
print("Job succeeded")
sagemaker.send_pipeline_execution_step_success(
CallbackToken=token,
OutputParameters=[
{"Name": "minutes", "Value": str(total_time)},
{
"Name": "s3_data_out",
"Value": str(output_uri),
},
],
)
else:
print(f"Job failed: {job_error}")
sagemaker.send_pipeline_execution_step_failure(CallbackToken=token, FailureReason=job_error)
except Exception as e:
trc = traceback.format_exc()
print(f"Error running ETL job: {str(e)}:\m {trc}")
sagemaker.send_pipeline_execution_step_failure(CallbackToken=token, FailureReason=str(e))
###Output
_____no_output_____
###Markdown
Next, write the code for your Dockerfile...
###Code
%%writefile container/Dockerfile
#FROM ubuntu:18.04
FROM public.ecr.aws/ubuntu/ubuntu:latest
RUN apt-get -y update && apt-get install -y --no-install-recommends \
python3-pip \
python3-setuptools \
curl \
unzip
RUN /usr/bin/pip3 install boto3
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
COPY task.py /opt
CMD /usr/bin/python3 /opt/task.py
###Output
_____no_output_____
###Markdown
Finally, use the studio image build CLI to build and push your image to ECR
###Code
%%sh
cd container
sm-docker build . --repository ecs-fargate-task:latest
###Output
_____no_output_____
###Markdown
After building the image, grab the ECR URI and store it in a local notebook variable; we do this in the last cell of this section.
###Code
import sagemaker as sage
sess = sage.Session()
account = sess.boto_session.client("sts").get_caller_identity()["Account"]
region = boto3.session.Session().region_name
task_uri = "{}.dkr.ecr.{}.amazonaws.com/ecs-fargate-task".format(account, region)
print("URI:", task_uri)
###Output
_____no_output_____
###Markdown
Set up ECS Fargate taskNow we will create and register the task definition using the roles we created above.
###Code
region = boto3.Session().region_name
response = ecs.register_task_definition(
family="FargateTaskRunner",
taskRoleArn=task_role_arn,
executionRoleArn=ecs_role_arn,
networkMode="awsvpc",
containerDefinitions=[
{
"name": "FargateTask",
"image": task_uri,
"cpu": 512,
"memory": 1024,
"essential": True,
"environment": [
{"name": "inputLocation", "value": "temp"},
{"name": "outputLocation", "value": "temp"},
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-create-group": "true",
"awslogs-group": "glue_sg_pipeline",
"awslogs-region": region,
"awslogs-stream-prefix": "task",
},
},
},
],
requiresCompatibilities=[
"FARGATE",
],
cpu="512",
memory="1024",
)
print(f"Task definition ARN: {response['taskDefinition']['taskDefinitionArn']}")
task_arn = response["taskDefinition"]["taskDefinitionArn"]
###Output
_____no_output_____
###Markdown
Copy data to our bucketNext, we'll copy the 2020 NYC taxi data to the SageMaker session default bucket, keeping the data split into one file per month.
###Code
s3 = boto3.client("s3")
taxi_bucket = "nyc-tlc"
taxi_prefix = "taxi"
for month in ["01", "02", "03", "04", "05", "06", "07", "08", "09", "10", "11", "12"]:
copy_source = {"Bucket": taxi_bucket, "Key": f"trip data/yellow_tripdata_2020-{month}.csv"}
s3.copy(copy_source, default_bucket, f"{taxi_prefix}/yellow_tripdata_2020-{month}.csv")
default_bucket
###Output
_____no_output_____
###Markdown
Create SQS queue for pipelineIn this step, we'll create the SQS queue that will be used by the CallbackStep inside SageMaker Pipeline steps. SageMaker Pipelines will put a token to this queue that will serve as a trigger for your Lambda function which will initiate the Fargate task to process your data.
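The exact message schema is owned by SageMaker Pipelines, but based on the fields the Lambda handler later in this notebook reads, each record body is roughly a JSON document of the following shape (illustrative values only, not an official schema):
###Code
# Illustrative shape of a callback message body, assumed from the handler code below
example_callback_body = {
    "token": "example-callback-token",
    "arguments": {
        "input_location": "s3://example-bucket/taxi/",
        "output_location": "s3://example-bucket/taxi_out/",
    },
}
###Output
_____no_output_____
###Markdown
Now create the queue itself: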
###Code
sqs_client = boto3.client("sqs")
queue_url = ""
queue_name = "pipeline_callbacks_glue_prep"
try:
    response = sqs_client.create_queue(QueueName=queue_name)
except Exception as e:
    print(f"Failed to create queue: {e}")
###Output
_____no_output_____
###Markdown
Construct the queue URL in the format we will need later on.
###Code
queue_url = f"https://sqs.{region}.amazonaws.com/{account}/{queue_name}"
queue_url
###Output
_____no_output_____
###Markdown
VPC and security settingsFor this setup, we'll use the default VPC and all of its subnets for the Fargate task. We'll also create a new security group for the tasks that allows all outbound traffic and only self-referencing ingress (traffic from other tasks in the same security group).
###Code
ec2 = boto3.client("ec2")
response = ec2.describe_vpcs(Filters=[{"Name": "isDefault", "Values": ["true"]}])
default_vpc_id = response["Vpcs"][0]["VpcId"]
response = ec2.describe_subnets(Filters=[{"Name": "vpc-id", "Values": [default_vpc_id]}])
task_subnets = []
for r in response["Subnets"]:
task_subnets.append(r["SubnetId"])
response = ec2.create_security_group(
Description="Security group for Fargate tasks", GroupName="fg_task_sg", VpcId=default_vpc_id
)
sg_id = response["GroupId"]
response = ec2.authorize_security_group_ingress(
GroupId=sg_id,
IpPermissions=[
{
"FromPort": 0,
"IpProtocol": "-1",
"UserIdGroupPairs": [
{"GroupId": sg_id, "Description": "local SG ingress"},
],
"ToPort": 65535,
},
],
)
###Output
_____no_output_____
###Markdown
Create ETL scriptThe ETL job will take two arguments, the location of the input data in S3 and the output path in S3.
###Code
%%writefile etl.py
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.sql.types import IntegerType
from pyspark.sql import functions as F
## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ["JOB_NAME", "input_uri", "output_uri"])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args["JOB_NAME"], args)
df = spark.read.format("csv").option("header", "true").load("{0}*.csv".format(args["input_uri"]))
df = df.withColumn("Passengers", df["passenger_count"].cast(IntegerType()))
df = df.withColumn(
"pickup_time",
F.to_timestamp(
F.unix_timestamp("tpep_pickup_datetime", "yyyy-MM-dd HH:mm:ss").cast("timestamp")
),
)
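# Aggregate passenger counts into 30-minute pickup windows, then drop the window struct so only the totals are written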
dfW = df.groupBy(F.window("pickup_time", "30 minutes")).agg(F.sum("Passengers").alias("passenger"))
dfOut = dfW.drop("window")
dfOut.repartition(1).write.option("timestampFormat", "yyyy-MM-dd HH:mm:ss").csv(args["output_uri"])
job.commit()
s3.upload_file("etl.py", default_bucket, "pipeline/etl.py")
glue_script_location = f"s3://{default_bucket}/pipeline/etl.py"
glue_script_location
###Output
_____no_output_____
###Markdown
Create ETL jobNext, we'll create the Glue job using the script and roles created in the previous steps.
###Code
glue = boto3.client("glue")
response = glue.create_job(
Name="GlueDataPrepForPipeline",
Description="Prepare data for SageMaker training",
Role=glue_role_arn,
ExecutionProperty={"MaxConcurrentRuns": 1},
Command={
"Name": "glueetl",
"ScriptLocation": glue_script_location,
},
MaxRetries=0,
Timeout=60,
MaxCapacity=10.0,
GlueVersion="2.0",
)
glue_job_name = response["Name"]
glue_job_name
###Output
_____no_output_____
###Markdown
Create Lambda functionThe Lambda function will be triggered on new messages to the SQS queue created by the CallbackStep in SageMaker Pipelines. The Lambda function is responsible for initiating the run of your Fargate task. Now, write the code that will be used in the Lambda function.
###Code
%%writefile queue_handler.py
import json
import boto3
import os
import traceback
ecs = boto3.client("ecs")
sagemaker = boto3.client("sagemaker")
def handler(event, context):
print(f"Got event: {json.dumps(event)}")
cluster_arn = os.environ["cluster_arn"]
task_arn = os.environ["task_arn"]
task_subnets = os.environ["task_subnets"]
task_sgs = os.environ["task_sgs"]
glue_job_name = os.environ["glue_job_name"]
print(f"Cluster ARN: {cluster_arn}")
print(f"Task ARN: {task_arn}")
print(f"Task Subnets: {task_subnets}")
print(f"Task SG: {task_sgs}")
print(f"Glue job name: {glue_job_name}")
for record in event["Records"]:
payload = json.loads(record["body"])
print(f"Processing record {payload}")
token = payload["token"]
print(f"Got token {token}")
try:
input_data_s3_uri = payload["arguments"]["input_location"]
output_data_s3_uri = payload["arguments"]["output_location"]
print(f"Got input_data_s3_uri {input_data_s3_uri}")
print(f"Got output_data_s3_uri {output_data_s3_uri}")
response = ecs.run_task(
cluster=cluster_arn,
count=1,
launchType="FARGATE",
taskDefinition=task_arn,
networkConfiguration={
"awsvpcConfiguration": {
"subnets": task_subnets.split(","),
"securityGroups": task_sgs.split(","),
"assignPublicIp": "ENABLED",
}
},
overrides={
"containerOverrides": [
{
"name": "FargateTask",
"environment": [
{"name": "inputLocation", "value": input_data_s3_uri},
{"name": "outputLocation", "value": output_data_s3_uri},
{"name": "token", "value": token},
{"name": "glue_job_name", "value": glue_job_name},
],
}
]
},
)
if "failures" in response and len(response["failures"]) > 0:
f = response["failures"][0]
print(f"Failed to launch task for token {token}: {f['reason']}")
                sagemaker.send_pipeline_execution_step_failure(CallbackToken=token, FailureReason=f["reason"])
else:
print(f"Launched task {response['tasks'][0]['taskArn']}")
except Exception as e:
trc = traceback.format_exc()
print(f"Error handling record: {str(e)}:\m {trc}")
sagemaker.send_step_failure(CallbackToken=token, FailureReason=e)
###Output
_____no_output_____
###Markdown
Finally, bundle the code and upload it to S3 then create the Lambda function...
###Code
import zipfile
archive = zipfile.ZipFile("queue_handler.zip", "w")
archive.write("queue_handler.py")
s3 = boto3.client("s3")
s3.upload_file("queue_handler.zip", default_bucket, "pipeline/queue_handler.zip")
lambda_client = boto3.client("lambda")
lambda_client.create_function(
Code={
"S3Bucket": default_bucket,
"S3Key": "pipeline/queue_handler.zip",
},
FunctionName="SMPipelineQueueHandler",
Description="Process Glue callback messages from SageMaker Pipelines",
Handler="queue_handler.handler",
Publish=True,
Role=lambda_role_arn,
Runtime="python3.7",
Timeout=20,
MemorySize=128,
PackageType="Zip",
Environment={
"Variables": {
"cluster_arn": cluster_arn,
"task_arn": task_arn,
"task_subnets": ",".join(task_subnets),
"task_sgs": sg_id,
"glue_job_name": glue_job_name,
}
},
)
###Output
_____no_output_____
###Markdown
Set up Lambda as SQS targetNext, we'll attach the lambda function created above to the SQS queue we previously created. This ensures that your Lambda will be triggered when new messages are put to your SQS queue.
###Code
lambda_client.create_event_source_mapping(
EventSourceArn=f"arn:aws:sqs:{region}:{account}:{queue_name}",
FunctionName="SMPipelineQueueHandler",
Enabled=True,
BatchSize=10,
)
###Output
_____no_output_____
###Markdown
Build & Execute SageMaker PipelineNow that all of the components are created and configured that support the tasks within your pipeline steps, we're now ready to bring it all together and setup the pipeline. First, install the SageMaker Python SDK.
###Code
!pip install "sagemaker==2.91.1"
###Output
_____no_output_____
###Markdown
Pipeline Initialization
###Code
import time
timestamp = int(time.time())
from sagemaker.workflow.parameters import (
ParameterInteger,
ParameterString,
)
input_data = ParameterString(
name="InputData", default_value=f"s3://{default_bucket}/{taxi_prefix}/"
)
id_out = ParameterString(name="IdOut", default_value="taxiout" + str(timestamp))
output_data = ParameterString(
name="OutputData", default_value=f"s3://{default_bucket}/{taxi_prefix}_output/"
)
training_instance_count = ParameterInteger(name="TrainingInstanceCount", default_value=1)
training_instance_type = ParameterString(name="TrainingInstanceType", default_value="ml.c5.xlarge")
###Output
_____no_output_____
###Markdown
Pipeline Steps 1 - Call Back Step First, we'll configure the callback step. The callback step will accept the following **inputs**: * S3 location of our raw taxi data * SQS queue The callback step will return the following **outputs**: * S3 location of processed data to be used for model training
###Code
from sagemaker.workflow.callback_step import CallbackStep, CallbackOutput, CallbackOutputTypeEnum
callback1_output = CallbackOutput(
output_name="s3_data_out", output_type=CallbackOutputTypeEnum.String
)
step_callback_data = CallbackStep(
name="GluePrepCallbackStep",
sqs_queue_url=queue_url,
inputs={
"input_location": f"s3://{default_bucket}/{taxi_prefix}/",
"output_location": f"s3://{default_bucket}/{taxi_prefix}_{id_out}/",
},
outputs=[callback1_output],
)
###Output
_____no_output_____
###Markdown
2 - Training Step Next, we'll configure the training step by first configuring the estimator for random cut forest. Then, we'll configure the training step. The training step will accept the following **inputs**: * S3 location of processed data to be used for model training * ECR containing the training image for rcf * Estimator configuration The training step will return the following **outputs**: * S3 location of the trained model artifact
###Code
containers = {
"us-west-2": "174872318107.dkr.ecr.us-west-2.amazonaws.com/randomcutforest:latest",
"us-east-1": "382416733822.dkr.ecr.us-east-1.amazonaws.com/randomcutforest:latest",
"us-east-2": "404615174143.dkr.ecr.us-east-2.amazonaws.com/randomcutforest:latest",
"eu-west-1": "438346466558.dkr.ecr.eu-west-1.amazonaws.com/randomcutforest:latest",
}
region_name = boto3.Session().region_name
container = containers[region_name]
model_prefix = "model"
session = sagemaker.Session()
rcf = sagemaker.estimator.Estimator(
container,
sagemaker.get_execution_role(),
output_path="s3://{}/{}/output".format(default_bucket, model_prefix),
instance_count=training_instance_count,
instance_type=training_instance_type,
sagemaker_session=session,
)
rcf.set_hyperparameters(num_samples_per_tree=200, num_trees=50, feature_dim=1)
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.steps import TrainingStep
step_train = TrainingStep(
name="TrainModel",
estimator=rcf,
inputs={
"train": TrainingInput(
# s3_data = Output of the previous call back step
s3_data=step_callback_data.properties.Outputs["s3_data_out"],
content_type="text/csv;label_size=0",
distribution="ShardedByS3Key",
),
},
)
###Output
_____no_output_____
###Markdown
3 - Create ModelNext, we'll package the trained model for deployment. The create model step will accept the following **inputs**: * S3 location of the trained model artifact * ECR containing the inference image for rcf The create model step will return the following **outputs**: * SageMaker packaged model
###Code
from sagemaker.model import Model
from sagemaker import get_execution_role
role = get_execution_role()
image_uri = sagemaker.image_uris.retrieve("randomcutforest", region)
model = Model(
image_uri=image_uri,
model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
sagemaker_session=sagemaker_session,
role=role,
)
from sagemaker.inputs import CreateModelInput
from sagemaker.workflow.steps import CreateModelStep
inputs = CreateModelInput(
instance_type="ml.m5.large",
)
create_model = CreateModelStep(
name="TaxiModel",
model=model,
inputs=inputs,
)
###Output
_____no_output_____
###Markdown
4 - Batch TransformNext, we'll deploy the model using batch transform then do a quick evaluation with our data to compute anomaly scores for each of our data points on input. The batch transform step will accept the following **inputs**: * SageMaker packaged model * S3 location of the input data * ECR containing the inference image for rcf The batch transform step will return the following **outputs**: * S3 location of the output data (anomaly scores)
###Code
base_uri = step_callback_data.properties.Outputs["s3_data_out"]
output_prefix = "batch-out"
from sagemaker.transformer import Transformer
transformer = Transformer(
model_name=create_model.properties.ModelName,
instance_type="ml.m5.xlarge",
assemble_with="Line",
accept="text/csv",
instance_count=1,
output_path=f"s3://{default_bucket}/{output_prefix}/",
)
from sagemaker.inputs import TransformInput
from sagemaker.workflow.steps import TransformStep
batch_data = step_callback_data.properties.Outputs["s3_data_out"]
step_transform = TransformStep(
name="TaxiTransform",
transformer=transformer,
inputs=TransformInput(
data=batch_data,
content_type="text/csv",
split_type="Line",
input_filter="$[0]",
join_source="Input",
output_filter="$[0,-1]",
),
)
###Output
_____no_output_____
###Markdown
Configure Pipeline Using Created Steps
###Code
import uuid
id_out = uuid.uuid4().hex
print("Unique ID:", id_out)
from sagemaker.workflow.pipeline import Pipeline
pipeline_name = f"GluePipeline-{id_out}"
pipeline = Pipeline(
name=pipeline_name,
parameters=[
input_data,
training_instance_type,
training_instance_count,
id_out,
],
steps=[step_callback_data, step_train, create_model, step_transform],
)
from sagemaker import get_execution_role
pipeline.upsert(role_arn=get_execution_role())
import json
definition = json.loads(pipeline.definition())
definition
###Output
_____no_output_____
###Markdown
Execute Pipeline
###Code
execution = pipeline.start()
execution.describe()
execution.list_steps()
###Output
_____no_output_____ |
feature_repo/run.ipynb | ###Markdown
Generate Data Pandas dataframes
###Code
import pandas as pd
import numpy as np
from datetime import datetime, timezone
from sklearn.datasets import make_hastie_10_2
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
def generate_entities(size):
return np.random.choice(size, size=size, replace=False)
def generate_data(entities, year=2021, month=10, day=1) -> pd.DataFrame:
n_samples=len(entities)
X, y = make_hastie_10_2(n_samples=n_samples, random_state=0)
df = pd.DataFrame(X, columns=["f0", "f1", "f2", "f3", "f4", "f5", "f6", "f7", "f8", "f9"])
df["y"]=y
df['entity_id'] = entities
df['datetime'] = pd.to_datetime(
np.random.randint(
datetime(year, month, day, 0,tzinfo=timezone.utc).timestamp(),
datetime(year, month, day, 22,tzinfo=timezone.utc).timestamp(),
size=n_samples),
unit="s", #utc=True
)
df['created'] = pd.to_datetime(
datetime.now(), #utc=True
)
df['month_year'] = pd.to_datetime(datetime(year, month, day, 0, tzinfo=timezone.utc), utc=True)
return df
entities=generate_entities(1000000)
entity_df = pd.DataFrame(data=entities, columns=['entity_id'])
entity_df["event_timestamp"]=datetime(2021, 1, 14, 23, 59, 42, tzinfo=timezone.utc)
###Output
_____no_output_____
###Markdown
Create Delta Lake
###Code
import time
for d in range(1,15):
    break  # skip regeneration: ./dataset/all was already written by an earlier full run (see the outputs below); remove this line to regenerate
print(f"DAY {d}")
start_time = time.time()
data=generate_data(entities,month=1, day=d)
print(f"## GENERATED - {time.time() - start_time} s")
start_time = time.time()
spark.createDataFrame(data).write.format("delta").mode("append").partitionBy('month_year').save("./dataset/all")
print(f"## DELTA CREATED - {time.time() - start_time} s")
###Output
DAY 1
## GENERATED - 1.9863653182983398 s
## DELTA CREATED - 118.46784734725952 s
DAY 2
## GENERATED - 2.2533488273620605 s
## DELTA CREATED - 113.56314516067505 s
DAY 3
## GENERATED - 2.090444326400757 s
## DELTA CREATED - 117.54949474334717 s
DAY 4
## GENERATED - 2.137775421142578 s
## DELTA CREATED - 113.69700503349304 s
DAY 5
## GENERATED - 2.0107674598693848 s
## DELTA CREATED - 112.49230170249939 s
DAY 6
## GENERATED - 2.04490327835083 s
## DELTA CREATED - 116.83132553100586 s
DAY 7
## GENERATED - 2.12314772605896 s
## DELTA CREATED - 114.3579614162445 s
DAY 8
## GENERATED - 2.1742141246795654 s
## DELTA CREATED - 115.68657755851746 s
DAY 9
## GENERATED - 2.001004695892334 s
## DELTA CREATED - 112.91505312919617 s
DAY 10
## GENERATED - 2.1537675857543945 s
## DELTA CREATED - 113.79394125938416 s
DAY 11
## GENERATED - 2.077458620071411 s
## DELTA CREATED - 116.54374861717224 s
DAY 12
## GENERATED - 2.2862818241119385 s
## DELTA CREATED - 119.25584959983826 s
DAY 13
## GENERATED - 2.121596336364746 s
## DELTA CREATED - 116.48291659355164 s
DAY 14
## GENERATED - 2.0689780712127686 s
## DELTA CREATED - 114.06461930274963 s
###Markdown
Delta Lake history
###Code
from delta.tables import *
deltaTable = DeltaTable.forPath(spark, "./dataset/all")
fullHistoryDF = deltaTable.history()
fullHistoryDF.show()
###Output
+-------+--------------------+------+--------+---------+--------------------+----+--------+---------+-----------+--------------+-------------+--------------------+------------+--------------------+
|version| timestamp|userId|userName|operation| operationParameters| job|notebook|clusterId|readVersion|isolationLevel|isBlindAppend| operationMetrics|userMetadata| engineInfo|
+-------+--------------------+------+--------+---------+--------------------+----+--------+---------+-----------+--------------+-------------+--------------------+------------+--------------------+
| 13|2022-02-11 01:08:...| null| null| WRITE|{mode -> Append, ...|null| null| null| 12| Serializable| true|{numFiles -> 12, ...| null|Apache-Spark/3.2....|
| 12|2022-02-11 01:06:...| null| null| WRITE|{mode -> Append, ...|null| null| null| 11| Serializable| true|{numFiles -> 12, ...| null|Apache-Spark/3.2....|
| 11|2022-02-11 01:04:...| null| null| WRITE|{mode -> Append, ...|null| null| null| 10| Serializable| true|{numFiles -> 12, ...| null|Apache-Spark/3.2....|
| 10|2022-02-11 01:02:...| null| null| WRITE|{mode -> Append, ...|null| null| null| 9| Serializable| true|{numFiles -> 12, ...| null|Apache-Spark/3.2....|
| 9|2022-02-11 01:00:...| null| null| WRITE|{mode -> Append, ...|null| null| null| 8| Serializable| true|{numFiles -> 12, ...| null|Apache-Spark/3.2....|
| 8|2022-02-11 00:58:...| null| null| WRITE|{mode -> Append, ...|null| null| null| 7| Serializable| true|{numFiles -> 12, ...| null|Apache-Spark/3.2....|
| 7|2022-02-11 00:56:...| null| null| WRITE|{mode -> Append, ...|null| null| null| 6| Serializable| true|{numFiles -> 12, ...| null|Apache-Spark/3.2....|
| 6|2022-02-11 00:54:...| null| null| WRITE|{mode -> Append, ...|null| null| null| 5| Serializable| true|{numFiles -> 12, ...| null|Apache-Spark/3.2....|
| 5|2022-02-11 00:52:...| null| null| WRITE|{mode -> Append, ...|null| null| null| 4| Serializable| true|{numFiles -> 12, ...| null|Apache-Spark/3.2....|
| 4|2022-02-11 00:50:...| null| null| WRITE|{mode -> Append, ...|null| null| null| 3| Serializable| true|{numFiles -> 12, ...| null|Apache-Spark/3.2....|
| 3|2022-02-11 00:48:...| null| null| WRITE|{mode -> Append, ...|null| null| null| 2| Serializable| true|{numFiles -> 12, ...| null|Apache-Spark/3.2....|
| 2|2022-02-11 00:46:...| null| null| WRITE|{mode -> Append, ...|null| null| null| 1| Serializable| true|{numFiles -> 12, ...| null|Apache-Spark/3.2....|
| 1|2022-02-11 00:44:...| null| null| WRITE|{mode -> Append, ...|null| null| null| 0| Serializable| true|{numFiles -> 12, ...| null|Apache-Spark/3.2....|
| 0|2022-02-11 00:42:...| null| null| WRITE|{mode -> Append, ...|null| null| null| null| Serializable| true|{numFiles -> 12, ...| null|Apache-Spark/3.2....|
+-------+--------------------+------+--------+---------+--------------------+----+--------+---------+-----------+--------------+-------------+--------------------+------------+--------------------+
###Markdown
Feast Apply
###Code
!rm -r .ipynb_checkpoints
from feast.repo_operations import apply_total
from feast.repo_config import load_repo_config
from pathlib import Path
repo = Path('/home/jovyan/feast-pyspark/feature_repo/')
repo_config = load_repo_config(repo)
apply_total(repo_config, repo, True)
import pyspark
edf = entity_df[entity_df.entity_id<=500]
edf = spark.createDataFrame(edf)
from feast import FeatureStore
import pandas as pd
import time
from feast_pyspark import SparkOfflineStore
store = FeatureStore(repo_path=".")
start_time = time.time()
training_df = store.get_historical_features(
entity_df=edf,
features = [
'my_statistics:f0',
'my_statistics:f1',
'my_statistics:f2',
'my_statistics:f3',
'my_statistics:f4',
'my_statistics:f5',
'my_statistics:f6',
'my_statistics:f7',
'my_statistics:f8',
'my_statistics:f9',
'my_statistics:y',
],
).to_df()
print("--- %s seconds ---" % (time.time() - start_time))
training_df
from feast import utils
from datetime import datetime
from feast import FeatureStore
from feast_pyspark import SparkOfflineStore
start_date=utils.make_tzaware(datetime.fromisoformat('2020-01-03T14:30:00'))
end_date=utils.make_tzaware(datetime.fromisoformat('2023-01-03T14:30:00'))
store = FeatureStore(repo_path=".")
store.materialize(start_date=start_date,end_date=end_date)
from pprint import pprint
from feast import FeatureStore
store = FeatureStore(repo_path=".")
feature_vector = store.get_online_features(
features=[
"my_statistics:f1",
"my_statistics:f2",
"my_statistics:f3",
],
entity_rows=[
{"entity_id": 1004},
{"entity_id": 1005},
],
).to_df()
feature_vector
###Output
_____no_output_____ |
day03/solution.ipynb | ###Markdown
Day 3: Binary Diagnostic* generate two new binary numbers (called the gamma rate and the epsilon rate)* Use the binary numbers in your diagnostic report to calculate the gamma rate and epsilon rate, then multiply them together. What is the power consumption of the submarine?
###Code
with open('input') as f:
inpts = [n.strip() for n in f.readlines()]
###Output
_____no_output_____
###Markdown
part 1* Each bit in the gamma rate can be determined by finding the most common bit in the corresponding position of all numbers in the diagnostic report. * The epsilon rate is calculated using the least common bit from each position.
###Code
gamma = epsilon = ''
#for each bit position, cycle through each number and get a list of the bits in that position
#then assign the appropriate value in that place for gamma and epsilon
for i in range(len(inpts[0])):
digit = [n[i] for n in inpts]
if digit.count('0') > digit.count('1'):
gamma += '0'
epsilon += '1'
else:
gamma += '1'
epsilon += '0'
print('part 1: ', int(gamma,2) * int(epsilon,2))
###Output
part 1: 749376
###Markdown
part 2* Keep only numbers selected by the bit criteria for the type of rating value for which you are searching. Discard numbers which do not match the bit criteria.* If you only have one number left, stop; this is the rating value for which you are searching.* Otherwise, repeat the process, considering the next bit to the right.The bit criteria depends on which type of rating value you want to find:* To find oxygen generator rating, determine the most common value (0 or 1) in the current bit position, and keep only numbers with that bit in that position. If 0 and 1 are equally common, keep values with a 1 in the position being considered.* To find CO2 scrubber rating, determine the least common value (0 or 1) in the current bit position, and keep only numbers with that bit in that position. If 0 and 1 are equally common, keep values with a 0 in the position being considered.
###Code
def num_selector(nums, most_common):
new_nums = []
place = 0
#starting at the beginning, determine which value is more common
#if we want the less common value, invert it
#then make a list of all the numbers with that digit at the current place
#then the process starts over, looking at the next place. this will run for as long as there is more than 1 number left
while len(nums) > 1:
digit = [n[place] for n in nums]
if digit.count('1') >= digit.count('0'):
val = 1
else:
val = 0
if not most_common: val = abs(val -1)
new_nums = [nums[m] for m in range(len(digit)) if int(digit[m]) == val]
nums = new_nums
place += 1
return nums[0]
o2 = num_selector(inpts, True)
co2 = num_selector(inpts, False)
print('part 2: ' , int(o2,2) * int(co2,2))
###Output
part 2: 2372923
###Markdown
Day 3 Part I Process the input. We again use pandas to keep the code short: reading the input file gives a NumPy array that is convenient for the later calculations:
###Code
import numpy as np
import pandas as pd
def read_input() -> np.ndarray:
df = pd.read_csv('input.txt', header=None)
    # Split the single string column read from the file into one column per character
df = df[0].apply(lambda x: pd.Series(list(x)))
    # Return the underlying NumPy array
return df.to_numpy()
###Output
_____no_output_____
###Markdown
Define a function that walks the matrix. Because the map repeats infinitely to the right, c must be taken modulo the number of columns of the matrix at each step:
###Code
def part1_solution(tree_map: np.ndarray) -> int:
rows, cols = tree_map.shape
c = 0
tree_count = 0
for r in range(rows):
c = c % cols
if tree_map[r, c] == '#':
tree_count += 1
c += 3
return tree_count
tree_map = read_input()
part1_solution(tree_map)
###Output
_____no_output_____
###Markdown
Part II In fact, the second part only needs a small modification of Part I; for clarity, a new function is defined here for the calculation:
###Code
from typing import Tuple
def part2_solution(tree_map: np.ndarray, slope: Tuple[int, int]) -> int:
    # Accept a new parameter, the slope: a tuple giving the step to the right and the step down
rows, cols = tree_map.shape
c = 0
tree_count = 0
for r in range(0, rows, slope[1]):
c = c % cols
if tree_map[r, c] == '#':
tree_count += 1
c += slope[0]
return tree_count
###Output
_____no_output_____
###Markdown
Use the result from Part I as a simple test:
###Code
assert(part2_solution(tree_map, (3, 1)) == 247)
###Output
_____no_output_____
###Markdown
Use reduce to compute the product of the tree counts over all slopes:
###Code
from functools import reduce
reduce(lambda x, y: x * y, (part2_solution(tree_map, s)
for s in [(1, 1), (3, 1), (5, 1), (7, 1), (1, 2)]), 1)
###Output
_____no_output_____ |
docs/notebooks/tutorial/logit_nested.ipynb | ###Markdown
Logit and Nested Logit Tutorial
###Code
import pyblp
import numpy as np
import pandas as pd
pyblp.options.digits = 2
pyblp.options.verbose = False
pyblp.__version__
###Output
_____no_output_____
###Markdown
In this tutorial, we'll use data from :ref:`references:Nevo (2000)` to solve the paper's fake cereal problem. Locations of CSV files that contain the data are in the :mod:`data` module.We will compare two simple models, the plain (IIA) logit model and the nested logit (GEV) model using the fake cereal dataset of :ref:`references:Nevo (2000)`. Theory of Plain LogitLet's start with the plain logit model under independence of irrelevant alternatives (IIA). In this model (indirect) utility is given by$$U_{jti} = \alpha p_{jt} + x_{jt} \beta^x + \xi_{jt} + \epsilon_{jti},$$where $\epsilon_{jti}$ is distributed IID with the Type I Extreme Value (Gumbel) distribution. It is common to normalize the mean utility of the outside good to zero so that $U_{0ti} = \epsilon_{0ti}$. This gives us aggregate marketshares$$s_{jt} = \frac{\exp(\alpha p_{jt} + x_{jt} \beta^x + \xi_{jt})}{1 + \sum_k \exp(\alpha p_{jt} + x_{kt} \beta^x + \xi_{kt})}.$$If we take logs we get$$\log s_{jt} = \alpha p_{jt} + x_{jt} \beta^x + \xi_{jt} - 0 - \log \sum_k \exp(\alpha p_{jt} + x_{kt} \beta^x + \xi_{kt})$$and$$\log s_{0t} = 0 - \log \sum_k \exp(\alpha p_{jt} + x_{kt} \beta^x + \xi_{kt}).$$By differencing the above we get a linear estimating equation:$$\log s_{jt} - \log s_{0t} = \alpha p_{jt} + x_{jt} \beta^x + \xi_{jt}.$$Because the left hand side is data, we can estimate this model using linear IV GMM. Application of Plain LogitA Logit :class:`Problem` can be created by simply excluding the formulation for the nonlinear parameters, $X_2$, along with any agent information. In other words, it requires only specifying the _linear component_ of demand.We'll set up and solve a simple version of the fake data cereal problem from :ref:`references:Nevo (2000)`. Since we won't include any nonlinear characteristics or parameters, we don't have to worry about configuring an optimization routine.There are some important reserved variable names:- `market_ids` are the unique market identifiers which we subscript with $t$.- `shares` specifies the marketshares which need to be between zero and one, and within a market ID, $\sum_{j} s_{jt} \leq 1$.- `prices` are prices $p_{jt}$. These have some special properties and are _always_ treated as endogenous.- `demand_instruments0`, `demand_instruments1`, and so on are numbered demand instruments. These represent only the _excluded_ instruments. The exogenous regressors in $X_1$ will be automatically added to the set of instruments.We begin with two steps:1. Load the product data which at a minimum consists of `market_ids`, `shares`, `prices`, and at least a single column of demand instruments, `demand_instruments0`.2. Define a :class:`Formulation` for the $X_1$ (linear) demand model. - This and all other formulas are similar to R and [patsy](https://patsy.readthedocs.io/en/stable/) formulas. - It includes a constant by default. To exclude the constant, specify either a `0` or a `-1`. - To efficiently include fixed effects, use the `absorb` option and specify which categorical variables you would like to absorb. - Some model reduction may happen automatically. The constant will be excluded if you include fixed effects and some precautions are taken against collinearity. However, you will have to make sure that differently-named variables are not collinear. 3. Combine the :class:`Formulation` and product data to construct a :class:`Problem`.4. Use :meth:`Problem.solve` to estimate paramters. 
Loading the DataThe `product_data` argument of :class:`Problem` should be a structured array-like object with fields that store data. Product data can be a structured [NumPy](https://www.numpy.org/) array, a [pandas](https://pandas.pydata.org/) DataFrame, or other similar objects.
###Code
product_data = pd.read_csv(pyblp.data.NEVO_PRODUCTS_LOCATION)
product_data.head()
###Output
_____no_output_____
###Markdown
The product data contains `market_ids`, `product_ids`, `firm_ids`, `shares`, `prices`, a number of other IDs and product characteristics, and some pre-computed excluded `demand_instruments0`, `demand_instruments1`, and so on. The `product_ids` will be incorporated as fixed effects. For more information about the instruments and the example data as a whole, refer to the :mod:`data` module. Setting Up the ProblemWe can combine the :class:`Formulation` and `product_data` to construct a :class:`Problem`. We pass the :class:`Formulation` first and the `product_data` second. We can also display the properties of the problem by typing its name.
###Code
logit_formulation = pyblp.Formulation('prices', absorb='C(product_ids)')
logit_formulation
problem = pyblp.Problem(logit_formulation, product_data)
problem
###Output
_____no_output_____
###Markdown
Two sets of properties are displayed:1. Dimensions of the data.2. Formulations of the problem.The dimensions describe the shapes of matrices as laid out in :ref:`notation:Notation`. They include:- $T$ is the number of markets.- $N$ is the length of the dataset (the number of products across all markets).- $F$ is the number of firms, which we won't use in this example.- $K_1$ is the dimension of the linear demand parameters.- $M_D$ is the dimension of the instrument variables (excluded instruments and exogenous regressors).- $E_D$ is the number of fixed effect dimensions (one-dimensional fixed effects, two-dimensional fixed effects, etc.).There is only a single :class:`Formulation` for this model. - $X_1$ is the linear component of utility for demand and depends only on prices (after the fixed effects are removed). Solving the ProblemThe :meth:`Problem.solve` method always returns a :class:`ProblemResults` class, which can be used to compute post-estimation outputs. See the [post estimation](post_estimation.ipynb) tutorial for more information.
###Code
logit_results = problem.solve()
logit_results
###Output
_____no_output_____
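###Markdown
As an optional sanity check on the estimating equation above, we can construct $\log s_{jt} - \log s_{0t}$ by hand and run a simple two-stage least squares regression with NumPy. This sketch ignores the absorbed product fixed effects and simply uses a constant plus the excluded instruments, so its price coefficient will not match the pyblp estimate above; it only illustrates the mechanics of the linear IV regression, which :class:`Problem` carries out with fixed effect absorption and proper GMM weighting.
###Code
# A minimal hand-rolled 2SLS sketch of the plain logit estimating equation (no fixed effects; illustrative only)
logit_df = product_data.copy()
logit_df["outside_share"] = 1 - logit_df.groupby("market_ids")["shares"].transform("sum")
y = np.log(logit_df["shares"]) - np.log(logit_df["outside_share"])
X = np.column_stack([np.ones(len(logit_df)), logit_df["prices"]])
iv_columns = [c for c in logit_df.columns if c.startswith("demand_instruments")]
Z = np.column_stack([np.ones(len(logit_df)), logit_df[iv_columns]])
# First stage: project X on Z; second stage: regress y on the fitted values
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta_hat = np.linalg.lstsq(X_hat, y, rcond=None)[0]
beta_hat  # [constant, alpha]; not comparable to the fixed-effects results above
###Output
_____no_output_____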
###Markdown
Theory of Nested LogitWe can extend the logit model to allow for correlation within a group $h$ so that$$U_{jti} = \alpha p_{jt} + x_{jt} \beta^x + \xi_{jt} + \bar{\epsilon}_{h(j)ti} + (1 - \rho) \bar{\epsilon}_{jti}.$$Now, we require that $\epsilon_{jti} = \bar{\epsilon}_{h(j)ti} + (1 - \rho) \bar{\epsilon}_{jti}$ is distributed IID with the Type I Extreme Value (Gumbel) distribution. As $\rho \rightarrow 1$, all consumers stay within their group. As $\rho \rightarrow 0$, this collapses to the IIA logit. Note that if we wanted, we could allow $\rho$ to differ between groups with the notation $\rho_{h(j)}$.This gives us aggregate marketshares as the product of two logits, the within group logit and the across group logit:$$s_{jt} = \frac{\exp[V_{jt} / (1 - \rho)]}{\exp[V_{h(j)t} / (1 - \rho)]}\cdot\frac{\exp V_{h(j)t}}{1 + \sum_h \exp V_{ht}},$$where $V_{jt} = \alpha p_{jt} + x_{jt} \beta^x + \xi_{jt}$.After some work we again obtain the linear estimating equation:$$\log s_{jt} - \log s_{0t} = \alpha p_{jt}+ x_{jt} \beta^x +\rho \log s_{j|h(j)t} + \xi_{jt},$$where $s_{j|h(j)t} = s_{jt} / s_{h(j)t}$ and $s_{h(j)t}$ is the share of group $h$ in market $t$. See :ref:`references:Berry (1994)` or :ref:`references:Cardell (1997)` for more information.Again, the left hand side is data, though the $\ln s_{j|h(j)t}$ is clearly endogenous which means we must instrument for it. Rather than include $\ln s_{j|h(j)t}$ along with the linear components of utility, $X_1$, whenever `nesting_ids` are included in `product_data`, $\rho$ is treated as a nonlinear $X_2$ parameter. This means that the linear component is given instead by$$\log s_{jt} - \log s_{0t} - \rho \log s_{j|h(j)t} = \alpha p_{jt} + x_{jt} \beta^x + \xi_{jt}.$$This is done for two reasons:1. It forces the user to treat $\rho$ as an endogenous parameter.2. It extends much more easily to the RCNL model of :ref:`references:Brenkers and Verboven (2006)`.A common choice for an additional instrument is the number of products per nest. Application of Nested LogitBy including `nesting_ids` (another reserved name) as a field in `product_data`, we tell the package to estimate a nested logit model, and we don't need to change any of the formulas. We show how to construct the category groupings in two different ways:1. We put all products in a single nest (only the outside good in the other nest).2. We put products into two nests (either mushy or non-mushy).We also construct an additional instrument based on the number of products per nest. Typically this is useful as a source of exogenous variation in the within group share $\ln s_{j|h(j)t}$. However, in this example because the number of products per nest does not vary across markets, if we include product fixed effects, this instrument is irrelevant.We'll define a function that constructs the additional instrument and solves the nested logit problem. We'll exclude product ID fixed effects, which are collinear with `mushy`, and we'll choose $\rho = 0.7$ as the initial value at which the optimization routine will start.
###Code
def solve_nl(df):
groups = df.groupby(['market_ids', 'nesting_ids'])
df['demand_instruments20'] = groups['shares'].transform(np.size)
nl_formulation = pyblp.Formulation('0 + prices')
problem = pyblp.Problem(nl_formulation, df)
return problem.solve(rho=0.7)
###Output
_____no_output_____
###Markdown
First, we'll solve the problem when there's a single nest for all products, with the outside good in its own nest.
###Code
df1 = product_data.copy()
df1['nesting_ids'] = 1
nl_results1 = solve_nl(df1)
nl_results1
###Output
_____no_output_____
###Markdown
When we inspect the :class:`Problem`, the only changes from the plain logit model is the additional instrument that contributes to $M_D$ and the inclusion of $H$, the number of nesting categories.
###Code
nl_results1.problem
###Output
_____no_output_____
###Markdown
Next, we'll solve the problem when there are two nests for mushy and non-mushy.
###Code
df2 = product_data.copy()
df2['nesting_ids'] = df2['mushy']
nl_results2 = solve_nl(df2)
nl_results2
###Output
_____no_output_____
###Markdown
For both cases we find that $\hat{\rho} > 0.8$.Finally, we'll also look at the adjusted parameter on prices, $\alpha / (1-\rho)$.
###Code
nl_results1.beta[0] / (1 - nl_results1.rho)
nl_results2.beta[0] / (1 - nl_results2.rho)
###Output
_____no_output_____
###Markdown
Treating Within Group Shares as ExogenousThe package is designed to prevent the user from treating the within group share, $\log s_{j|h(j)t}$, as an exogenous variable. For example, if we were to compute a `group_share` variable and use the algebraic functionality of :class:`Formulation` by including the expression `log(shares / group_share)` in our formula for $X_1$, the package would raise an error because the package knows that `shares` should not be included in this formulation.To demonstrate why this is a bad idea, we'll override this feature by calculating $\log s_{j|h(j)t}$ and including it as an additional variable in $X_1$. To do so, we'll first re-define our function for setting up and solving the nested logit problem.
###Code
def solve_nl2(df):
groups = df.groupby(['market_ids', 'nesting_ids'])
df['group_share'] = groups['shares'].transform(np.sum)
df['within_share'] = df['shares'] / df['group_share']
df['demand_instruments20'] = groups['shares'].transform(np.size)
nl2_formulation = pyblp.Formulation('0 + prices + log(within_share)')
problem = pyblp.Problem(nl2_formulation, df.drop(columns=['nesting_ids']))
return problem.solve()
###Output
_____no_output_____
###Markdown
Again, we'll solve the problem when there's a single nest for all products, with the outside good in its own nest.
###Code
nl2_results1 = solve_nl2(df1)
nl2_results1
###Output
_____no_output_____
###Markdown
And again, we'll solve the problem when there are two nests for mushy and non-mushy.
###Code
nl2_results2 = solve_nl2(df2)
nl2_results2
###Output
_____no_output_____
###Markdown
One can observe that we obtain parameter estimates which are quite different than above.
###Code
nl2_results1.beta[0] / (1 - nl2_results1.beta[1])
nl2_results2.beta[0] / (1 - nl2_results2.beta[1])
###Output
_____no_output_____
###Markdown
Logit and Nested Logit Tutorial
###Code
import pyblp
import numpy as np
import pandas as pd
pyblp.options.digits = 2
pyblp.options.verbose = False
pyblp.__version__
###Output
_____no_output_____
###Markdown
In this tutorial, we'll use data from :ref:`references:Nevo (2000)` to solve the paper's fake cereal problem. Locations of CSV files that contain the data are in the :mod:`data` module.We will compare two simple models, the plain (IIA) logit model and the nested logit (GEV) model using the fake cereal dataset of :ref:`references:Nevo (2000)`. Theory of Plain LogitLet's start with the plain logit model under independence of irrelevant alternatives (IIA). In this model (indirect) utility is given by$$U_{jti} = \alpha p_{jt} + x_{jt} \beta^\text{ex} + \xi_{jt} + \epsilon_{jti},$$where $\epsilon_{jti}$ is distributed IID with the Type I Extreme Value (Gumbel) distribution. It is common to normalize the mean utility of the outside good to zero so that $U_{0ti} = \epsilon_{0ti}$. This gives us aggregate marketshares$$s_{jt} = \frac{\exp(\alpha p_{jt} + x_{jt} \beta^\text{ex} + \xi_{jt})}{1 + \sum_k \exp(\alpha p_{kt} + x_{kt} \beta^\text{ex} + \xi_{kt})}.$$If we take logs we get$$\log s_{jt} = \alpha p_{jt} + x_{jt} \beta^\text{ex} + \xi_{jt} - \log \sum_k \exp(\alpha p_{kt} + x_{kt} \beta^\text{ex} + \xi_{kt})$$and$$\log s_{0t} = -\log \sum_k \exp(\alpha p_{kt} + x_{kt} \beta^\text{ex} + \xi_{kt}).$$By differencing the above we get a linear estimating equation:$$\log s_{jt} - \log s_{0t} = \alpha p_{jt} + x_{jt} \beta^\text{ex} + \xi_{jt}.$$Because the left hand side is data, we can estimate this model using linear IV GMM. Application of Plain LogitA Logit :class:`Problem` can be created by simply excluding the formulation for the nonlinear parameters, $X_2$, along with any agent information. In other words, it requires only specifying the _linear component_ of demand.We'll set up and solve a simple version of the fake data cereal problem from :ref:`references:Nevo (2000)`. Since we won't include any demand-side nonlinear characteristics or parameters, we don't have to worry about configuring an optimization routine.There are some important reserved variable names:- `market_ids` are the unique market identifiers which we subscript with $t$.- `shares` specifies the marketshares which need to be between zero and one, and within a market ID, $\sum_{j} s_{jt} \leq 1$.- `prices` are prices $p_{jt}$. These have some special properties and are _always_ treated as endogenous.- `demand_instruments0`, `demand_instruments1`, and so on are numbered demand instruments. These represent only the _excluded_ instruments. The exogenous regressors in $X_1$ will be automatically added to the set of instruments.We begin with two steps:1. Load the product data which at a minimum consists of `market_ids`, `shares`, `prices`, and at least a single column of demand instruments, `demand_instruments0`.2. Define a :class:`Formulation` for the $X_1$ (linear) demand model. - This and all other formulas are similar to R and [patsy](https://patsy.readthedocs.io/en/stable/) formulas. - It includes a constant by default. To exclude the constant, specify either a `0` or a `-1`. - To efficiently include fixed effects, use the `absorb` option and specify which categorical variables you would like to absorb. - Some model reduction may happen automatically. The constant will be excluded if you include fixed effects and some precautions are taken against collinearity. However, you will have to make sure that differently-named variables are not collinear. 3. Combine the :class:`Formulation` and product data to construct a :class:`Problem`.4. Use :meth:`Problem.solve` to estimate paramters. 
Loading the DataThe `product_data` argument of :class:`Problem` should be a structured array-like object with fields that store data. Product data can be a structured [NumPy](https://numpy.org/) array, a [pandas](https://pandas.pydata.org/) DataFrame, or other similar objects.
###Code
product_data = pd.read_csv(pyblp.data.NEVO_PRODUCTS_LOCATION)
product_data.head()
###Output
_____no_output_____
###Markdown
The product data contains `market_ids`, `product_ids`, `firm_ids`, `shares`, `prices`, a number of other IDs and product characteristics, and some pre-computed excluded `demand_instruments0`, `demand_instruments1`, and so on. The `product_ids` will be incorporated as fixed effects. For more information about the instruments and the example data as a whole, refer to the :mod:`data` module. Setting Up the ProblemWe can combine the :class:`Formulation` and `product_data` to construct a :class:`Problem`. We pass the :class:`Formulation` first and the `product_data` second. We can also display the properties of the problem by typing its name.
###Code
logit_formulation = pyblp.Formulation('prices', absorb='C(product_ids)')
logit_formulation
problem = pyblp.Problem(logit_formulation, product_data)
problem
###Output
_____no_output_____
###Markdown
Two sets of properties are displayed:1. Dimensions of the data.2. Formulations of the problem.The dimensions describe the shapes of matrices as laid out in :ref:`notation:Notation`. They include:- $T$ is the number of markets.- $N$ is the length of the dataset (the number of products across all markets).- $F$ is the number of firms, which we won't use in this example.- $K_1$ is the dimension of the linear demand parameters.- $M_D$ is the dimension of the instrument variables (excluded instruments and exogenous regressors).- $E_D$ is the number of fixed effect dimensions (one-dimensional fixed effects, two-dimensional fixed effects, etc.).There is only a single :class:`Formulation` for this model. - $X_1$ is the linear component of utility for demand and depends only on prices (after the fixed effects are removed). Solving the ProblemThe :meth:`Problem.solve` method always returns a :class:`ProblemResults` class, which can be used to compute post-estimation outputs. See the [post estimation](post_estimation.ipynb) tutorial for more information.
###Code
logit_results = problem.solve()
logit_results
###Output
_____no_output_____
###Markdown
Theory of Nested LogitWe can extend the logit model to allow for correlation within a group $h$ so that$$U_{jti} = \alpha p_{jt} + x_{jt} \beta^\text{ex} + \xi_{jt} + \bar{\epsilon}_{h(j)ti} + (1 - \rho) \bar{\epsilon}_{jti}.$$Now, we require that $\epsilon_{jti} = \bar{\epsilon}_{h(j)ti} + (1 - \rho) \bar{\epsilon}_{jti}$ is distributed IID with the Type I Extreme Value (Gumbel) distribution. As $\rho \rightarrow 1$, all consumers stay within their group. As $\rho \rightarrow 0$, this collapses to the IIA logit. Note that if we wanted, we could allow $\rho$ to differ between groups with the notation $\rho_{h(j)}$.This gives us aggregate marketshares as the product of two logits, the within group logit and the across group logit:$$s_{jt} = \frac{\exp[V_{jt} / (1 - \rho)]}{\exp[V_{h(j)t} / (1 - \rho)]}\cdot\frac{\exp V_{h(j)t}}{1 + \sum_h \exp V_{ht}},$$where $V_{jt} = \alpha p_{jt} + x_{jt} \beta^\text{ex} + \xi_{jt}$.After some work we again obtain the linear estimating equation:$$\log s_{jt} - \log s_{0t} = \alpha p_{jt}+ x_{jt} \beta^\text{ex} +\rho \log s_{j|h(j)t} + \xi_{jt},$$where $s_{j|h(j)t} = s_{jt} / s_{h(j)t}$ and $s_{h(j)t}$ is the share of group $h$ in market $t$. See :ref:`references:Berry (1994)` or :ref:`references:Cardell (1997)` for more information.Again, the left hand side is data, though the $\ln s_{j|h(j)t}$ is clearly endogenous which means we must instrument for it. Rather than include $\ln s_{j|h(j)t}$ along with the linear components of utility, $X_1$, whenever `nesting_ids` are included in `product_data`, $\rho$ is treated as a nonlinear $X_2$ parameter. This means that the linear component is given instead by$$\log s_{jt} - \log s_{0t} - \rho \log s_{j|h(j)t} = \alpha p_{jt} + x_{jt} \beta^\text{ex} + \xi_{jt}.$$This is done for two reasons:1. It forces the user to treat $\rho$ as an endogenous parameter.2. It extends much more easily to the RCNL model of :ref:`references:Brenkers and Verboven (2006)`.A common choice for an additional instrument is the number of products per nest. Application of Nested LogitBy including `nesting_ids` (another reserved name) as a field in `product_data`, we tell the package to estimate a nested logit model, and we don't need to change any of the formulas. We show how to construct the category groupings in two different ways:1. We put all products in a single nest (only the outside good in the other nest).2. We put products into two nests (either mushy or non-mushy).We also construct an additional instrument based on the number of products per nest. Typically this is useful as a source of exogenous variation in the within group share $\ln s_{j|h(j)t}$. However, in this example because the number of products per nest does not vary across markets, if we include product fixed effects, this instrument is irrelevant.We'll define a function that constructs the additional instrument and solves the nested logit problem. We'll exclude product ID fixed effects, which are collinear with `mushy`, and we'll choose $\rho = 0.7$ as the initial value at which the optimization routine will start.
###Code
def solve_nl(df):
groups = df.groupby(['market_ids', 'nesting_ids'])
df['demand_instruments20'] = groups['shares'].transform(np.size)
nl_formulation = pyblp.Formulation('0 + prices')
problem = pyblp.Problem(nl_formulation, df)
return problem.solve(rho=0.7)
###Output
_____no_output_____
###Markdown
First, we'll solve the problem when there's a single nest for all products, with the outside good in its own nest.
###Code
df1 = product_data.copy()
df1['nesting_ids'] = 1
nl_results1 = solve_nl(df1)
nl_results1
###Output
_____no_output_____
###Markdown
When we inspect the :class:`Problem`, the only changes from the plain logit model is the additional instrument that contributes to $M_D$ and the inclusion of $H$, the number of nesting categories.
###Code
nl_results1.problem
###Output
_____no_output_____
###Markdown
Next, we'll solve the problem when there are two nests for mushy and non-mushy.
###Code
df2 = product_data.copy()
df2['nesting_ids'] = df2['mushy']
nl_results2 = solve_nl(df2)
nl_results2
###Output
_____no_output_____
###Markdown
For both cases we find that $\hat{\rho} > 0.8$.Finally, we'll also look at the adjusted parameter on prices, $\alpha / (1-\rho)$.
###Code
nl_results1.beta[0] / (1 - nl_results1.rho)
nl_results2.beta[0] / (1 - nl_results2.rho)
###Output
_____no_output_____
###Markdown
Treating Within Group Shares as ExogenousThe package is designed to prevent the user from treating the within group share, $\log s_{j|h(j)t}$, as an exogenous variable. For example, if we were to compute a `group_share` variable and use the algebraic functionality of :class:`Formulation` by including the expression `log(shares / group_share)` in our formula for $X_1$, the package would raise an error because the package knows that `shares` should not be included in this formulation.To demonstrate why this is a bad idea, we'll override this feature by calculating $\log s_{j|h(j)t}$ and including it as an additional variable in $X_1$. To do so, we'll first re-define our function for setting up and solving the nested logit problem.
###Code
def solve_nl2(df):
groups = df.groupby(['market_ids', 'nesting_ids'])
df['group_share'] = groups['shares'].transform(np.sum)
df['within_share'] = df['shares'] / df['group_share']
df['demand_instruments20'] = groups['shares'].transform(np.size)
nl2_formulation = pyblp.Formulation('0 + prices + log(within_share)')
problem = pyblp.Problem(nl2_formulation, df.drop(columns=['nesting_ids']))
return problem.solve()
###Output
_____no_output_____
###Markdown
Again, we'll solve the problem when there's a single nest for all products, with the outside good in its own nest.
###Code
nl2_results1 = solve_nl2(df1)
nl2_results1
###Output
_____no_output_____
###Markdown
And again, we'll solve the problem when there are two nests for mushy and non-mushy.
###Code
nl2_results2 = solve_nl2(df2)
nl2_results2
###Output
_____no_output_____
###Markdown
One can observe that we obtain parameter estimates which are quite different from those above: treating the endogenous within group share as if it were exogenous biases the estimates.
###Code
nl2_results1.beta[0] / (1 - nl2_results1.beta[1])
nl2_results2.beta[0] / (1 - nl2_results2.beta[1])
###Output
_____no_output_____
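###Markdown
As a final side-by-side check, we can collect the adjusted price coefficients from the instrumented nested logit specifications and from the specifications that wrongly treated the within group share as exogenous. This summary cell is a small addition to the tutorial text; all of the numbers come from the results objects computed above.
###Code
# Collect the four adjusted coefficients computed above into one labeled series.
import pandas as pd
pd.Series({
    'nested logit, one nest': (nl_results1.beta[0] / (1 - nl_results1.rho)).item(),
    'nested logit, mushy nests': (nl_results2.beta[0] / (1 - nl_results2.rho)).item(),
    'exogenous within share, one nest': (nl2_results1.beta[0] / (1 - nl2_results1.beta[1])).item(),
    'exogenous within share, mushy nests': (nl2_results2.beta[0] / (1 - nl2_results2.beta[1])).item(),
}, name='adjusted price coefficient')
###Output
_____no_output_____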
###Markdown
Logit and Nested Logit Tutorial
###Code
import pyblp
import numpy as np
import pandas as pd
pyblp.options.digits = 2
pyblp.options.verbose = False
pyblp.__version__
###Output
_____no_output_____
###Markdown
In this tutorial, we'll use data from :ref:`references:Nevo (2000)` to solve the paper's fake cereal problem. Locations of CSV files that contain the data are in the :mod:`data` module.We will compare two simple models, the plain (IIA) logit model and the nested logit (GEV) model using the fake cereal dataset of :ref:`references:Nevo (2000)`. Theory of Plain LogitLet's start with the plain logit model under independence of irrelevant alternatives (IIA). In this model (indirect) utility is given by$$U_{jti} = x_{jt} \beta - \alpha p_{jt} + \xi_{jt} + \epsilon_{jti},$$where $\varepsilon_{jti}$ is distributed IID with the Type I extreme value (Gumbel) distribution. It is common to normalize the mean utility of the outside good to zero so that $U_{0ti} = \varepsilon_{0ti}$. This gives us aggregate marketshares$$s_{jt} = \frac{\exp(x_{jt} \beta - \alpha p_{jt} + \xi_{jt})}{\sum_k \exp(x_{kt} \beta - \alpha p_{jt} + \xi_{kt})}.$$If we take logs we get$$\ln s_{jt} = x_{jt} \beta - \alpha p_{jt} + \xi_{jt} - \ln \sum_k \exp(x_{kt} \beta - \alpha p_{jt} + \xi_{kt})$$and$$\ln s_{0t} = 0 - \ln \sum_k \exp(x_{kt} \beta - \alpha p_{jt} + \xi_{kt}).$$By differencing the above we get a linear estimating equation:$$\ln s_{jt} - \ln s_{0t} = x_{jt}\beta - \alpha p_{jt} + \xi_{jt}.$$Because the left hand side is data, we can estimate this model using linear IV GMM. Application of Plain LogitA Logit :class:`Problem` can be created by simply excluding the formulation for the nonlinear parameters, $X_2$, along with any agent information. In other words, it requires only specifying the _linear component_ of demand.We'll set up and solve a simple version of the fake data cereal problem from :ref:`references:Nevo (2000)`. Since we won't include any nonlinear characteristics or parameters, we don't have to worry about configuring an optimization routine.There are some important reserved variable names:- `market_ids` are the unique market identifiers which we subscript with $t$.- `product_ids` are the unique product identifiers which we subscript with $j$. These are optional.- `shares` specifies the marketshares which need to be between zero and one, and within a market ID, $\sum_{j} s_{jt} \leq 1$.- `prices` are prices $p_{jt}$. These have some special properties and are _always_ treated as endogenous.- `demand_instruments0`, `demand_instruments1`, and so on are numbered demand instruments. These represent only the _excluded_ instruments. The exogenous regressors in $X_1$ will be automatically added to the set of instruments.We begin with two steps:1. Load the `product data` which at a minimum consists of `market_ids`, `shares`, `prices`, and at least a single column of demand instruments, `demand_instruments0`.2. Define a :class:`Formulation` for the $X_1$ (linear) demand model. - This and all other formulas are similar to R and [patsy](https://patsy.readthedocs.io/en/stable/) formulas. - It includes a constant by default. To exclude the constant either specify a `0` or a `-1`. - To efficiently include fixed effects, use the `absorb` option and specify which categorical variable(s) you would like to absorb. - Some model reduction may happen automatically. The constant will be excluded if you include fixed effects and some precautions are taken against collinearity. However, you will have to make sure that differently-named variables are not collinear. 3. Combine the :class:`Formulation` and `product data` to construct a :class:`Problem`.4. Use :meth:`Problem.solve` to estimate paramters. 
Loading the DataThe `product_data` argument of :class:`Problem` should be a structured array-like object with fields that store data. Product data can be a structured [NumPy](https://www.numpy.org/) array, a [pandas](https://pandas.pydata.org/) DataFrame, or other similar objects.
###Code
product_data = pd.read_csv(pyblp.data.NEVO_PRODUCTS_LOCATION)
product_data.head()
###Output
_____no_output_____
###Markdown
The product data contains `market_ids`, `product_ids`, `firm_ids`, `shares`, `prices`, a number of product characteristics, and some pre-computed excluded `demand_instruments0`, `demand_instruments1`, and so on. The `product_ids` will be used to construct fixed effects. For more information about the instruments and the example data as a whole, refer to the :mod:`data` module. Setting Up the ProblemWe can combine the :class:`Formulation` and the `product_data` and construct a :class:`Problem`. We pass the :class:`Formulation` first and the `product_data` second. We can also display the properties of the problem by typing its name.
###Code
logit_formulation = pyblp.Formulation('prices', absorb='C(product_ids)')
logit_formulation
problem = pyblp.Problem(logit_formulation, product_data)
problem
###Output
_____no_output_____
###Markdown
Two sets of properties are displayed:1. Dimensions of the data.2. Formulations of the problem.The dimensions describe the shapes of matrices as layed out in :ref:`notation:Notation`. They include:- $N$ is the length of the dataset (all products and markets).- $T$ is the number of markets.- $F$ is the number of firms, which we won't use in this example.- $K_1$ is the dimension of the linear demand parameters.- $M_D$ is the dimension of the instrument variables (excluded instruments and exogenous regressors).- $E_D$ is the number of fixed effects (one-dimensional fixed effects, two-dimensional fixed effects, etc.).There is only a single :class:`Formulation` for this model. - $X_1$ is the linear component of utility for demand and depends only on prices (after the fixed effects are removed). Solving the ProblemThe :meth:`Problem.solve` method always returns a :class:`ProblemResults` class, which can be used to compute post-estimation outputs. See the [post-estimation](post_estimation.ipynb) for more information.
###Code
logit_results = problem.solve()
logit_results
###Output
_____no_output_____
###Markdown
Multicollinearity IllustrationAs an illustration, let's estimate a model with an obvious multicollinearity problem and see what happens. Suppose we include the variables `mushy` and `sugar` in the model. Because they don't vary within `product_ids`, they are absorbed into the fixed effects.
###Code
collinear_formulation = pyblp.Formulation('prices + mushy + sugar', absorb='C(product_ids)')
pyblp.Problem(collinear_formulation, product_data).solve()
###Output
_____no_output_____
###Markdown
Notice that we get the same results as before and we do not estimate coefficients on `mushy` or `sugar`. Although multicollinearity did not pose a problem here, in other cases it may create errors. Theory of Nested LogitWe can extend the logit model to allow for correlation within a group $h$ so that$$U_{jti} = x_{jt} \beta - \alpha p_{jt} + \xi_{jt} + \bar{\epsilon}_{h(j)ti} + (1 - \rho) \bar{\epsilon}_{jti}.$$Now, we require that $\epsilon_{jti} = \bar{\epsilon}_{h(j)ti} + (1 - \rho) \bar{\epsilon}_{jti}$ is distributed IID with the Type I extreme value (Gumbel) distribution. As $\rho \rightarrow 1$, all consumers stay within their group. As $\rho \rightarrow 0$, this collapses to the IIA logit. Note that if we wanted, we could allow $\rho$ to differ between groups with the notation $\rho_{h(j)}$.This gives us aggregate marketshares as the product of two logits, the within group logit and the across group logit:$$s_{jt} = \frac{\exp[V_{jt} / (1 - \rho)]}{\exp[V_{h(j)t} / (1 - \rho)]}\cdot\frac{\exp V_{h(j)t}}{1 + \sum_h \exp V_{ht}},$$where $V_{jt} = x_{jt} \beta - \alpha p_{jt} + \xi_{jt}$.After some work we again obtain the linear estimating equation:$$\ln s_{jt} - \ln s_{0t} = x_{jt}\beta - \alpha p_{jt} +\rho \ln s_{j|h(j)t} + \xi_{jt},$$where $s_{j|h(j)t} = s_{jt} / s_{h(j)t}$ and $s_{h(j)t}$ is the share of group $h$ in market $t$. See :ref:`references:Berry (1994)` or :ref:`references:Cardell (1997)` for more information.Again, the left hand side is data, though the $\ln s_{j|h(j)t}$ is clearly endogenous which means we must instrument for it. Rather than include $\ln s_{j|h(j)t}$ along with the linear components of utility, $X_1$, whenever `nesting_ids` are included in `product_data`, $\rho$ is treated as a nonlinear $X_2$ parameter. This means that the linear component is given instead by$$\ln s_{jt} - \ln s_{0t} -\rho \ln s_{j|h(j)t} = x_{jt}\beta - \alpha p_{jt} + \xi_{jt}.$$This is done for two reasons:1. It forces the user to treat $\rho$ as an endogenous parameter.2. It extends much more easily to the RCNL model of :ref:`references:Grigolon and Verboven (2014)`.A common choice for an additional instrument is the number of products per nest. Application of Nested LogitBy including `nesting_ids` (another reserved name) as a field in `product_data`, we tell the package to estimate a nested logit model, and we don't need to change any of the formulas. We show how to construct the category groupings in two different ways:1. We put all products in a single nest (only the outside good in the other nest).2. We put products into two nests (either mushy or non-mushy).We also construct an additional instrument based on the number of products per nest. Typically this is useful as a source of exogenous variation in the within group share $\ln s_{j|h(j)t}$. However, in this example because the number of products per nest do not vary across markets, if we include product fixed effects, this instrument is irrelevant.We'll define a function that constructs the additional instrument and solves the nested logit problem. We'll exclude product ID fixed effects, which are collinear with `mushy,` and we'll choose $\rho = 0.7$ as the initial value at which the optimization routine will start.
###Code
def solve_nl(df):
groups = df.groupby(['market_ids', 'nesting_ids'])
df['demand_instruments20'] = groups['shares'].transform(np.size)
nl_formulation = pyblp.Formulation('0 + prices')
problem = pyblp.Problem(nl_formulation, df)
return problem.solve(rho=0.7)
###Output
_____no_output_____
###Markdown
First, we'll solve the problem when there's a single nest for all products, with the outside good in its own nest.
###Code
df1 = product_data.copy()
df1['nesting_ids'] = 1
nl_results1 = solve_nl(df1)
nl_results1
###Output
_____no_output_____
###Markdown
When we inspect the :class:`Problem`, the only changes from the plain logit model are the additional instrument that contributes to $M_D$ and the inclusion of $H$, the number of nesting categories.
###Code
nl_results1.problem
###Output
_____no_output_____
###Markdown
Next, we'll solve the problem when there are two nests for mushy and non-mushy.
###Code
df2 = product_data.copy()
df2['nesting_ids'] = df2['mushy']
nl_results2 = solve_nl(df2)
nl_results2
###Output
_____no_output_____
###Markdown
For both cases we find that $\hat{\rho} > 0.8$.Finally, we'll also look at the adjusted parameter on prices, $\alpha / (1-\rho)$.
###Code
nl_results1.beta[0] / (1 - nl_results1.rho)
nl_results2.beta[0] / (1 - nl_results2.rho)
###Output
_____no_output_____
###Markdown
Treating Within Group Shares as ExogenousThe package is designed to prevent the user from treating the within group share, $\log s_{j|h(j)t}$, as an exogenous variable. For example, if we were to compute a `group_share` variable and use the algebraic functionality of :class:`Formulation` by including the expression `log(shares / group_share)` in our formula for $X_1$, the package would raise an error because `shares` should not be included in this formulation.In order to demonstrate why this is a bad idea, we override this feature by calculating $\log s_{j|h(j)t}$ and including it as an additional variable in $X_1$. To do so, we'll first re-define our function for setting up and solving the nested logit problem.
###Code
def solve_nl2(df):
groups = df.groupby(['market_ids', 'nesting_ids'])
df['group_share'] = groups['shares'].transform(np.sum)
df['within_share'] = df['shares'] / df['group_share']
df['demand_instruments20'] = groups['shares'].transform(np.size)
nl2_formulation = pyblp.Formulation('0 + prices + log(within_share)')
problem = pyblp.Problem(nl2_formulation, df.drop(columns=['nesting_ids']))
return problem.solve()
###Output
_____no_output_____
###Markdown
Again, we'll solve the problem when there's a single nest for all products, with the outside good in its own nest.
###Code
nl2_results1 = solve_nl2(df1)
nl2_results1
###Output
_____no_output_____
###Markdown
And again, we'll solve the problem when there are two nests for mushy and non-mushy.
###Code
nl2_results2 = solve_nl2(df2)
nl2_results2
###Output
_____no_output_____
###Markdown
Notice that we obtain parameter estimates that are quite different from those above.
###Code
nl2_results1.beta[0] / (1 - nl2_results1.beta[1])
nl2_results2.beta[0] / (1 - nl2_results2.beta[1])
###Output
_____no_output_____
###Markdown
Logit and Nested Logit Tutorial
###Code
import pyblp
import numpy as np
import pandas as pd
pyblp.options.digits = 2
pyblp.options.verbose = False
pyblp.__version__
###Output
_____no_output_____
###Markdown
In this tutorial, we'll use data from :ref:`references:Nevo (2000)` to solve the paper's fake cereal problem. Locations of CSV files that contain the data are in the :mod:`data` module.We will compare two simple models, the plain (IIA) logit model and the nested logit (GEV) model using the fake cereal dataset of :ref:`references:Nevo (2000)`. Theory of Plain LogitLet's start with the plain logit model under independence of irrelevant alternatives (IIA). In this model (indirect) utility is given by$$U_{ijt} = \alpha p_{jt} + x_{jt} \beta^\text{ex} + \xi_{jt} + \epsilon_{ijt},$$where $\epsilon_{ijt}$ is distributed IID with the Type I Extreme Value (Gumbel) distribution. It is common to normalize the mean utility of the outside good to zero so that $U_{i0t} = \epsilon_{i0t}$. This gives us aggregate market shares$$s_{jt} = \frac{\exp(\alpha p_{jt} + x_{jt} \beta^\text{ex} + \xi_{jt})}{1 + \sum_k \exp(\alpha p_{kt} + x_{kt} \beta^\text{ex} + \xi_{kt})}.$$If we take logs we get$$\log s_{jt} = \alpha p_{jt} + x_{jt} \beta^\text{ex} + \xi_{jt} - \log \sum_k \exp(\alpha p_{kt} + x_{kt} \beta^\text{ex} + \xi_{kt})$$and$$\log s_{0t} = -\log \sum_k \exp(\alpha p_{kt} + x_{kt} \beta^\text{ex} + \xi_{kt}).$$By differencing the above we get a linear estimating equation:$$\log s_{jt} - \log s_{0t} = \alpha p_{jt} + x_{jt} \beta^\text{ex} + \xi_{jt}.$$Because the left hand side is data, we can estimate this model using linear IV GMM. Application of Plain LogitA Logit :class:`Problem` can be created by simply excluding the formulation for the nonlinear parameters, $X_2$, along with any agent information. In other words, it requires only specifying the _linear component_ of demand.We'll set up and solve a simple version of the fake data cereal problem from :ref:`references:Nevo (2000)`. Since we won't include any demand-side nonlinear characteristics or parameters, we don't have to worry about configuring an optimization routine.There are some important reserved variable names:- `market_ids` are the unique market identifiers which we subscript with $t$.- `shares` specifies the market shares which need to be between zero and one, and within a market ID, $\sum_{j} s_{jt} \leq 1$.- `prices` are prices $p_{jt}$. These have some special properties and are _always_ treated as endogenous.- `demand_instruments0`, `demand_instruments1`, and so on are numbered demand instruments. These represent only the _excluded_ instruments. The exogenous regressors in $X_1$ will be automatically added to the set of instruments.We begin with the following steps:1. Load the product data which at a minimum consists of `market_ids`, `shares`, `prices`, and at least a single column of demand instruments, `demand_instruments0`.2. Define a :class:`Formulation` for the $X_1$ (linear) demand model. - This and all other formulas are similar to R and [patsy](https://patsy.readthedocs.io/en/stable/) formulas. - It includes a constant by default. To exclude the constant, specify either a `0` or a `-1`. - To efficiently include fixed effects, use the `absorb` option and specify which categorical variables you would like to absorb. - Some model reduction may happen automatically. The constant will be excluded if you include fixed effects and some precautions are taken against collinearity. However, you will have to make sure that differently-named variables are not collinear. 3. Combine the :class:`Formulation` and product data to construct a :class:`Problem`.4. Use :meth:`Problem.solve` to estimate parameters.
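As an illustrative aside, the left-hand side of this estimating equation can be computed directly from the data with pandas. The sketch below assumes only a DataFrame with `market_ids` and `shares` columns (as in the product data loaded next); pyblp performs this step internally, so this is not part of the estimation workflow.
###Code
import numpy as np
import pandas as pd

def logit_delta(df):
    """Compute log(s_jt) - log(s_0t), where s_0t = 1 - (sum of inside shares in market t)."""
    inside = df.groupby('market_ids')['shares'].transform('sum')
    outside = 1 - inside
    return np.log(df['shares']) - np.log(outside)

# Toy example with one market and two products
toy = pd.DataFrame({'market_ids': ['m1', 'm1'], 'shares': [0.2, 0.3]})
print(logit_delta(toy))  # the dependent variable for the linear IV regression
###Output
_____no_output_____
###Markdown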
Loading the DataThe `product_data` argument of :class:`Problem` should be a structured array-like object with fields that store data. Product data can be a structured [NumPy](https://numpy.org/) array, a [pandas](https://pandas.pydata.org/) DataFrame, or other similar objects.
###Code
product_data = pd.read_csv(pyblp.data.NEVO_PRODUCTS_LOCATION)
product_data.head()
###Output
_____no_output_____
###Markdown
The product data contains `market_ids`, `product_ids`, `firm_ids`, `shares`, `prices`, a number of other IDs and product characteristics, and some pre-computed excluded `demand_instruments0`, `demand_instruments1`, and so on. The `product_ids` will be incorporated as fixed effects. For more information about the instruments and the example data as a whole, refer to the :mod:`data` module. Setting Up the ProblemWe can combine the :class:`Formulation` and `product_data` to construct a :class:`Problem`. We pass the :class:`Formulation` first and the `product_data` second. We can also display the properties of the problem by typing its name.
###Code
logit_formulation = pyblp.Formulation('prices', absorb='C(product_ids)')
logit_formulation
problem = pyblp.Problem(logit_formulation, product_data)
problem
###Output
_____no_output_____
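###Markdown
As a quick sanity check of the dimensions printed above (a rough sketch using only pandas and the raw product data, not the :class:`Problem` object), the number of markets and the total number of products can be recomputed directly.
###Code
# Cross-check the T (markets) and N (observations) dimensions from the raw data
print('markets:', product_data['market_ids'].nunique())
print('products across all markets:', len(product_data))
###Output
_____no_output_____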
###Markdown
Two sets of properties are displayed:1. Dimensions of the data.2. Formulations of the problem.The dimensions describe the shapes of matrices as laid out in :ref:`notation:Notation`. They include:- $T$ is the number of markets.- $N$ is the length of the dataset (the number of products across all markets).- $F$ is the number of firms, which we won't use in this example.- $K_1$ is the dimension of the linear demand parameters.- $M_D$ is the dimension of the instrument variables (excluded instruments and exogenous regressors).- $E_D$ is the number of fixed effect dimensions (one-dimensional fixed effects, two-dimensional fixed effects, etc.).There is only a single :class:`Formulation` for this model. - $X_1$ is the linear component of utility for demand and depends only on prices (after the fixed effects are removed). Solving the ProblemThe :meth:`Problem.solve` method always returns a :class:`ProblemResults` class, which can be used to compute post-estimation outputs. See the [post estimation](post_estimation.ipynb) tutorial for more information.
###Code
logit_results = problem.solve()
logit_results
###Output
_____no_output_____
###Markdown
Theory of Nested LogitWe can extend the logit model to allow for correlation within a group $h$ so that$$U_{ijt} = \alpha p_{jt} + x_{jt} \beta^\text{ex} + \xi_{jt} + \bar{\epsilon}_{h(j)ti} + (1 - \rho) \bar{\epsilon}_{ijt}.$$Now, we require that $\epsilon_{ijt} = \bar{\epsilon}_{h(j)ti} + (1 - \rho) \bar{\epsilon}_{ijt}$ is distributed IID with the Type I Extreme Value (Gumbel) distribution. As $\rho \rightarrow 1$, all consumers stay within their group. As $\rho \rightarrow 0$, this collapses to the IIA logit. Note that if we wanted, we could allow $\rho$ to differ between groups with the notation $\rho_{h(j)}$.This gives us aggregate market shares as the product of two logits, the within group logit and the across group logit:$$s_{jt} = \frac{\exp[V_{jt} / (1 - \rho)]}{\exp[V_{h(j)t} / (1 - \rho)]}\cdot\frac{\exp V_{h(j)t}}{1 + \sum_h \exp V_{ht}},$$where $V_{jt} = \alpha p_{jt} + x_{jt} \beta^\text{ex} + \xi_{jt}$.After some work we again obtain the linear estimating equation:$$\log s_{jt} - \log s_{0t} = \alpha p_{jt} + x_{jt} \beta^\text{ex} + \rho \log s_{j|h(j)t} + \xi_{jt},$$where $s_{j|h(j)t} = s_{jt} / s_{h(j)t}$ and $s_{h(j)t}$ is the share of group $h$ in market $t$. See :ref:`references:Berry (1994)` or :ref:`references:Cardell (1997)` for more information.Again, the left hand side is data, though the $\log s_{j|h(j)t}$ is clearly endogenous which means we must instrument for it. Rather than include $\log s_{j|h(j)t}$ along with the linear components of utility, $X_1$, whenever `nesting_ids` are included in `product_data`, $\rho$ is treated as a nonlinear $X_2$ parameter. This means that the linear component is given instead by$$\log s_{jt} - \log s_{0t} - \rho \log s_{j|h(j)t} = \alpha p_{jt} + x_{jt} \beta^\text{ex} + \xi_{jt}.$$This is done for two reasons:1. It forces the user to treat $\rho$ as an endogenous parameter.2. It extends much more easily to the RCNL model of :ref:`references:Brenkers and Verboven (2006)`.A common choice for an additional instrument is the number of products per nest. Application of Nested LogitBy including `nesting_ids` (another reserved name) as a field in `product_data`, we tell the package to estimate a nested logit model, and we don't need to change any of the formulas. We show how to construct the category groupings in two different ways:1. We put all products in a single nest (only the outside good in the other nest).2. We put products into two nests (either mushy or non-mushy).We also construct an additional instrument based on the number of products per nest. Typically this is useful as a source of exogenous variation in the within group share $\log s_{j|h(j)t}$. However, in this example because the number of products per nest does not vary across markets, if we include product fixed effects, this instrument is irrelevant.We'll define a function that constructs the additional instrument and solves the nested logit problem. We'll exclude product ID fixed effects, which are collinear with `mushy`, and we'll choose $\rho = 0.7$ as the initial value at which the optimization routine will start.
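For readers who want the "after some work" step spelled out, the algebra is short. Interpreting the group term $V_{h(j)t}$ in the share formula as the inclusive value $(1 - \rho) \log \sum_{k \in h(j)} \exp[V_{kt} / (1 - \rho)]$, the two logits give$$\log s_{j|h(j)t} = \frac{V_{jt} - V_{h(j)t}}{1 - \rho}, \qquad \log s_{jt} - \log s_{0t} = \frac{V_{jt} - V_{h(j)t}}{1 - \rho} + V_{h(j)t}.$$Solving the first expression for $V_{h(j)t}$ and substituting it into the second yields$$\log s_{jt} - \log s_{0t} = V_{jt} + \rho \log s_{j|h(j)t},$$which is the estimating equation above once $V_{jt} = \alpha p_{jt} + x_{jt} \beta^\text{ex} + \xi_{jt}$ is plugged in. Returning to the application, the estimation function described above is defined next.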
###Code
def solve_nl(df):
groups = df.groupby(['market_ids', 'nesting_ids'])
df['demand_instruments20'] = groups['shares'].transform(np.size)
nl_formulation = pyblp.Formulation('0 + prices')
problem = pyblp.Problem(nl_formulation, df)
return problem.solve(rho=0.7)
###Output
_____no_output_____
###Markdown
First, we'll solve the problem when there's a single nest for all products, with the outside good in its own nest.
###Code
df1 = product_data.copy()
df1['nesting_ids'] = 1
nl_results1 = solve_nl(df1)
nl_results1
###Output
_____no_output_____
###Markdown
When we inspect the :class:`Problem`, the only changes from the plain logit model are the additional instrument that contributes to $M_D$ and the inclusion of $H$, the number of nesting categories.
###Code
nl_results1.problem
###Output
_____no_output_____
###Markdown
Next, we'll solve the problem when there are two nests for mushy and non-mushy.
###Code
df2 = product_data.copy()
df2['nesting_ids'] = df2['mushy']
nl_results2 = solve_nl(df2)
nl_results2
###Output
_____no_output_____
###Markdown
For both cases we find that $\hat{\rho} > 0.8$.Finally, we'll also look at the adjusted parameter on prices, $\alpha / (1-\rho)$.
###Code
nl_results1.beta[0] / (1 - nl_results1.rho)
nl_results2.beta[0] / (1 - nl_results2.rho)
###Output
_____no_output_____
###Markdown
Treating Within Group Shares as ExogenousThe package is designed to prevent the user from treating the within group share, $\log s_{j|h(j)t}$, as an exogenous variable. For example, if we were to compute a `group_share` variable and use the algebraic functionality of :class:`Formulation` by including the expression `log(shares / group_share)` in our formula for $X_1$, the package would raise an error because the package knows that `shares` should not be included in this formulation.To demonstrate why this is a bad idea, we'll override this feature by calculating $\log s_{j|h(j)t}$ and including it as an additional variable in $X_1$. To do so, we'll first re-define our function for setting up and solving the nested logit problem.
###Code
def solve_nl2(df):
groups = df.groupby(['market_ids', 'nesting_ids'])
df['group_share'] = groups['shares'].transform(np.sum)
df['within_share'] = df['shares'] / df['group_share']
df['demand_instruments20'] = groups['shares'].transform(np.size)
nl2_formulation = pyblp.Formulation('0 + prices + log(within_share)')
problem = pyblp.Problem(nl2_formulation, df.drop(columns=['nesting_ids']))
return problem.solve()
###Output
_____no_output_____
###Markdown
Again, we'll solve the problem when there's a single nest for all products, with the outside good in its own nest.
###Code
nl2_results1 = solve_nl2(df1)
nl2_results1
###Output
_____no_output_____
###Markdown
And again, we'll solve the problem when there are two nests for mushy and non-mushy.
###Code
nl2_results2 = solve_nl2(df2)
nl2_results2
###Output
_____no_output_____
###Markdown
Notice that we obtain parameter estimates that are quite different from those above.
###Code
nl2_results1.beta[0] / (1 - nl2_results1.beta[1])
nl2_results2.beta[0] / (1 - nl2_results2.beta[1])
###Output
_____no_output_____ |
tutorials/experiments/02_experiments_sat.ipynb | ###Markdown
This notebook experiments with different smoother approximations of RELU as proposed in the paper [Smooth Adversarial Training](https://arxiv.org/abs/2006.14536). The authors show that RELU hurts the adversarial robustness of models and that if it is replaced with smoother approximations like Swish, GELU, or Parametric SoftPlus (proposed in the same paper), then adversarial robustness is enhanced greatly. The authors attribute this performance boost to the fact that smoother activation functions help in producing more informed gradients that, in turn, help to create harder adversarial examples *during* training. So, we end up training our model to be robust against harder adversarial examples, which is desirable for many practical purposes. For the purpose of this notebook we will be using GELU and Swish, which are available via TensorFlow core. Here's a figure from the same paper depicting the forward and backward nature of smoother activation functions. **Note**: the notebook uses code from [this tutorial](https://www.tensorflow.org/neural_structured_learning/tutorials/adversarial_keras_cnn_mnist). Initial Setup
###Code
!pip install -q tf-nightly # `tf-nightly` because of gelu and swish
!pip install -q neural-structured-learning
import matplotlib.pyplot as plt
import neural_structured_learning as nsl
import numpy as np
import tensorflow as tf
tf.get_logger().setLevel('INFO')
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
print("TensorFlow version:", tf.__version__)
###Output
TensorFlow version: 2.5.0-dev20201104
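###Markdown
Before defining the models, it can be helpful to visualize the forward pass and the gradients of the three activations compared in this notebook. This is a small, self-contained sketch standing in for the figure from the paper; it is not part of the NSL tutorial code and assumes a TensorFlow build that ships `tf.nn.gelu` (such as the tf-nightly installed above).
###Code
import tensorflow as tf
import matplotlib.pyplot as plt

x = tf.linspace(-4.0, 4.0, 200)
activations = {"relu": tf.nn.relu, "gelu": tf.nn.gelu, "swish": tf.nn.swish}

fig, (ax_fwd, ax_bwd) = plt.subplots(1, 2, figsize=(10, 4))
for name, fn in activations.items():
    # Compute the activation and its elementwise gradient
    with tf.GradientTape() as tape:
        tape.watch(x)
        y = fn(x)
    grad = tape.gradient(y, x)
    ax_fwd.plot(x, y, label=name)
    ax_bwd.plot(x, grad, label=name)
ax_fwd.set_title("forward pass")
ax_bwd.set_title("gradient")
ax_fwd.legend()
plt.show()
###Output
_____no_output_____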
###Markdown
Define Hyperparameters
###Code
class HParams(object):
def __init__(self):
self.input_shape = [28, 28, 1]
self.num_classes = 10
self.conv_filters = [32, 64, 64]
self.kernel_size = (3, 3)
self.pool_size = (2, 2)
self.num_fc_units = [64]
self.batch_size = 32
self.epochs = 5
self.adv_multiplier = 0.2
self.adv_step_size = 0.2
self.adv_grad_norm = 'infinity'
HPARAMS = HParams()
###Output
_____no_output_____
###Markdown
FashionMNIST Dataset
###Code
datasets = tfds.load('fashion_mnist')
train_dataset = datasets['train']
test_dataset = datasets['test']
IMAGE_INPUT_NAME = 'image'
LABEL_INPUT_NAME = 'label'
def normalize(features):
features[IMAGE_INPUT_NAME] = tf.cast(
features[IMAGE_INPUT_NAME], dtype=tf.float32) / 255.0
return features
def convert_to_tuples(features):
return features[IMAGE_INPUT_NAME], features[LABEL_INPUT_NAME]
def convert_to_dictionaries(image, label):
return {IMAGE_INPUT_NAME: image, LABEL_INPUT_NAME: label}
train_dataset = train_dataset.map(normalize).shuffle(10000).batch(HPARAMS.batch_size).map(convert_to_tuples)
test_dataset = test_dataset.map(normalize).batch(HPARAMS.batch_size).map(convert_to_tuples)
###Output
_____no_output_____
###Markdown
Model Utils
###Code
def build_base_model(hparams, activation="relu"):
"""Builds a model according to the architecture defined in `hparams`."""
inputs = tf.keras.Input(
shape=hparams.input_shape, dtype=tf.float32, name=IMAGE_INPUT_NAME)
x = inputs
for i, num_filters in enumerate(hparams.conv_filters):
x = tf.keras.layers.Conv2D(
num_filters, hparams.kernel_size, activation=activation)(
x)
if i < len(hparams.conv_filters) - 1:
# max pooling between convolutional layers
x = tf.keras.layers.MaxPooling2D(hparams.pool_size)(x)
x = tf.keras.layers.Flatten()(x)
for num_units in hparams.num_fc_units:
x = tf.keras.layers.Dense(num_units, activation=activation)(x)
pred = tf.keras.layers.Dense(hparams.num_classes, activation='softmax')(x)
model = tf.keras.Model(inputs=inputs, outputs=pred)
return model
base_model = build_base_model(HPARAMS)
base_model.summary()
###Output
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
image (InputLayer) [(None, 28, 28, 1)] 0
_________________________________________________________________
conv2d (Conv2D) (None, 26, 26, 32) 320
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 32) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 11, 11, 64) 18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 64) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 3, 3, 64) 36928
_________________________________________________________________
flatten (Flatten) (None, 576) 0
_________________________________________________________________
dense (Dense) (None, 64) 36928
_________________________________________________________________
dense_1 (Dense) (None, 10) 650
=================================================================
Total params: 93,322
Trainable params: 93,322
Non-trainable params: 0
_________________________________________________________________
###Markdown
Train Baseline Model and EvaluationLet's start with our baseline model, which includes RELU as its primary non-linearity.
###Code
base_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
metrics=['acc'])
base_model.fit(train_dataset, epochs=HPARAMS.epochs)
results = base_model.evaluate(test_dataset)
relu_named_results = dict(zip(base_model.metrics_names, results))
print('\naccuracy:', relu_named_results['acc'])
###Output
313/313 [==============================] - 4s 12ms/step - loss: 0.2657 - acc: 0.9062
accuracy: 0.9061999917030334
###Markdown
GELU Model
###Code
gelu_model = build_base_model(HPARAMS, tf.nn.gelu)
gelu_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
metrics=['acc'])
gelu_model.fit(train_dataset, epochs=HPARAMS.epochs)
results = gelu_model.evaluate(test_dataset)
gelu_named_results = dict(zip(gelu_model.metrics_names, results))
print('\naccuracy:', gelu_named_results['acc'])
###Output
313/313 [==============================] - 4s 13ms/step - loss: 0.2757 - acc: 0.9026
accuracy: 0.9025999903678894
###Markdown
Swish Model
###Code
swish_model = build_base_model(HPARAMS, tf.nn.swish)  # use the Swish activation here
swish_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
metrics=['acc'])
swish_model.fit(train_dataset, epochs=HPARAMS.epochs)
results = swish_model.evaluate(test_dataset)
swish_named_results = dict(zip(swish_model.metrics_names, results))
print('\naccuracy:', swish_named_results['acc'])
###Output
313/313 [==============================] - 4s 12ms/step - loss: 0.2721 - acc: 0.9066
accuracy: 0.9065999984741211
###Markdown
We see all three models yielding similar results. Now, we are interested in seeing how hard the adversarial examples produced by each of these models are. Adversarially Fooling the ModelsTo do this, we first create a configuration for producing adversarial perturbations and then we use that to wrap our models with `nsl.keras.AdversarialRegularization`.
###Code
adv_config = nsl.configs.make_adv_reg_config(
multiplier=HPARAMS.adv_multiplier,
adv_step_size=HPARAMS.adv_step_size,
adv_grad_norm=HPARAMS.adv_grad_norm
)
def get_reference_model(model):
    # Wrap the given model (rather than always the global `base_model`) with adversarial regularization
    reference_model = nsl.keras.AdversarialRegularization(
        model,
        label_keys=[LABEL_INPUT_NAME],
        adv_config=adv_config)
    reference_model.compile(
        optimizer='adam',
        loss='sparse_categorical_crossentropy',
        metrics=['acc'])
    return reference_model
###Output
_____no_output_____
###Markdown
`nsl` expects the inputs to be in a dictionary format - `{'image': image, 'label': label}` for example.
###Code
train_set_for_adv_model = train_dataset.map(convert_to_dictionaries)
test_set_for_adv_model = test_dataset.map(convert_to_dictionaries)
###Output
_____no_output_____
###Markdown
Now, we evaluate the adversarial robustness of each of these models.
###Code
def benchmark_model(reference_model, models_to_eval):
perturbed_images, labels, predictions = [], [], []
metrics = {
name: tf.keras.metrics.SparseCategoricalAccuracy()
for name in models_to_eval.keys()
}
for batch in test_set_for_adv_model:
perturbed_batch = reference_model.perturb_on_batch(batch)
# Clipping makes perturbed examples have the same range as regular ones.
perturbed_batch[IMAGE_INPUT_NAME] = tf.clip_by_value(
perturbed_batch[IMAGE_INPUT_NAME], 0.0, 1.0)
y_true = perturbed_batch.pop(LABEL_INPUT_NAME)
perturbed_images.append(perturbed_batch[IMAGE_INPUT_NAME].numpy())
labels.append(y_true.numpy())
predictions.append({})
for name, model in models_to_eval.items():
y_pred = model(perturbed_batch)
metrics[name](y_true, y_pred)
predictions[-1][name] = tf.argmax(y_pred, axis=-1).numpy()
for name, metric in metrics.items():
print('%s model accuracy: %f' % (name, metric.result().numpy()))
# We take the RELU model to create adversarial examples first,
# then use that model to evaluate on the adversarial examples
relu_adv_model = get_reference_model(base_model)
models_to_eval = {
'relu': base_model,
}
benchmark_model(relu_adv_model, models_to_eval)
# We take the GELU model to create adversarial examples first,
# then use that model to evaluate on the adversarial examples
gelu_adv_model = get_reference_model(gelu_model)
models_to_eval = {
'gelu': gelu_model,
}
benchmark_model(gelu_adv_model, models_to_eval)
# We take the Swish model to create adversarial examples first,
# then use that model to evaluate on the adversarial examples
swish_adv_model = get_reference_model(swish_model)
models_to_eval = {
'swish': swish_model,
}
benchmark_model(swish_adv_model, models_to_eval)
###Output
WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int64
###Markdown
Notice that the RELU model fails considerably compared to the GELU and Swish models in terms of validation accuracy. Next, we are going to use the Swish model (you can use the GELU model too) to generate the adversarial examples, and we will evaluate the RELU model on those examples.
###Code
swish_adv_model = get_reference_model(swish_model)
models_to_eval = {
'relu': base_model,
}
benchmark_model(swish_adv_model, models_to_eval)
###Output
WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int64
###Markdown
Let's now see what happens if we swap the models, i.e., use the RELU model to generate the adversarial examples and the Swish model for evaluation.
###Code
relu_adv_model = get_reference_model(base_model)
models_to_eval = {
'swish': swish_model,
}
benchmark_model(relu_adv_model, models_to_eval)
###Output
WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int64
###Markdown
This indeed suggests that the Swish model is able to produce harder adversarial examples than the RELU model. Adversarial Training with SwishWe now train the Swish model with adversarial regularization.
###Code
swish_adv_model = build_base_model(HPARAMS, tf.nn.swish)
adv_model = nsl.keras.AdversarialRegularization(
swish_adv_model,
label_keys=[LABEL_INPUT_NAME],
adv_config=adv_config
)
adv_model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['acc'])
adv_model.fit(train_set_for_adv_model, epochs=HPARAMS.epochs)
###Output
Epoch 1/5
WARNING:tensorflow:The dtype of the watched tensor must be floating (e.g. tf.float32), got tf.int64
###Markdown
We can now compare the performance of the Swish model and this adversarially regularized Swish model to see the benefits.
###Code
swish_ref_model = get_reference_model(swish_model)
models_to_eval = {
'swish': swish_model,
'swish-adv': adv_model.base_model
}
benchmark_model(swish_ref_model, models_to_eval)
###Output
WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int64
|
tutorial/notebooks/.ipynb_checkpoints/Binary Choice Task - Person Identification in Video-checkpoint.ipynb | ###Markdown
CrowdTruth for Binary Choice Tasks: Person Identification in VideoIn this tutorial, we will apply CrowdTruth metrics to a **binary choice** crowdsourcing task for **Person Identification** in **video fragments**. The workers were asked to watch a short video fragment of about 3-5 seconds and then decide whether there is any *person* that appears in the video fragment. The task was executed on [FigureEight](https://www.figure-eight.com/). For more crowdsourcing annotation task examples, click [here](https://raw.githubusercontent.com/CrowdTruth-core/tutorial/getting_started.md).To replicate this experiment, the code used to design and implement this crowdsourcing annotation template is available here: [template](https://raw.githubusercontent.com/CrowdTruth/CrowdTruth-core/master/tutorial/templates/People-Video-Binary/template.html), [css](https://raw.githubusercontent.com/CrowdTruth/CrowdTruth-core/master/tutorial/templates/People-Video-Binary/template.css), [javascript](https://raw.githubusercontent.com/CrowdTruth/CrowdTruth-core/master/tutorial/templates/People-Video-Binary/template.js). This is a screenshot of the task as it appeared to workers:![Task Template](../img/person-video-binary.png) A sample dataset for this task is available in [this file](https://raw.githubusercontent.com/CrowdTruth/CrowdTruth-core/master/tutorial/data/person-video-binary-choice.csv), containing raw output from the crowd on FigureEight. Download the file and place it in a folder named `data` that has the same root as this notebook. Now you can check your data:
###Code
import pandas as pd
test_data = pd.read_csv("../data/person-video-binary-choice.csv")
test_data.head()
###Output
_____no_output_____
###Markdown
Declaring a pre-processing configurationThe pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class:
###Code
import crowdtruth
from crowdtruth.configuration import DefaultConfig
###Output
_____no_output_____
###Markdown
Our test class inherits the default configuration `DefaultConfig`, while also declaring some additional attributes that are specific to the Person Identification task:* **`inputColumns`:** list of input columns from the .csv file with the input data* **`outputColumns`:** list of output columns from the .csv file with the answers from the workers* **`open_ended_task`:** boolean variable defining whether the task is open-ended (i.e. the possible crowd annotations are not known beforehand, like in the case of free text input); in the task that we are processing, workers pick the answers from a pre-defined list, therefore the task is not open ended, and this variable is set to `False`* **`annotation_vector`:** list of possible crowd answers, mandatory to declare when `open_ended_task` is `False`; for our task, this is a list containing the `yes` and `no` values* **`processJudgments`:** method that defines processing of the raw crowd data; for this task, we process the crowd answers to correspond to the values in `annotation_vector`The complete configuration class is declared below:
###Code
class TestConfig(DefaultConfig):
inputColumns = ["videolocation", "subtitles", "imagetags", "subtitletags"]
outputColumns = ["selected_answer"]
# processing of a closed task
open_ended_task = False
annotation_vector = ["yes", "no"]
def processJudgments(self, judgments):
# pre-process output to match the values in annotation_vector
for col in self.outputColumns:
# transform to lowercase
judgments[col] = judgments[col].apply(lambda x: str(x).lower())
return judgments
###Output
_____no_output_____
###Markdown
Pre-processing the input dataAfter declaring the configuration of our input file, we are ready to pre-process the crowd data:
###Code
data, config = crowdtruth.load(
file = "../data/person-video-binary-choice.csv",
config = TestConfig()
)
data['judgments'].head()
###Output
_____no_output_____
###Markdown
Computing the CrowdTruth metricsThe pre-processed data can then be used to calculate the CrowdTruth metrics:
###Code
results = crowdtruth.run(data, config)
###Output
_____no_output_____
###Markdown
`results` is a dict object that contains the quality metrics for the video fragments, annotations and crowd workers.The **video fragment metrics** are stored in `results["units"]`:
###Code
results["units"].head()
###Output
_____no_output_____
###Markdown
The `uqs` column in `results["units"]` contains the **video fragment quality scores**, capturing the overall workers agreement over each video fragment. Here we plot its histogram:
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(results["units"]["uqs"])
plt.xlabel("Video Fragment Quality Score")
plt.ylabel("Video Fragment")
###Output
_____no_output_____
###Markdown
The `unit_annotation_score` column in `results["units"]` contains the **video fragment-annotation scores**, capturing the likelihood that an annotation is expressed in a video fragment. For each video fragment, we store a dictionary mapping each annotation to its video fragment-annotation score.
###Code
results["units"]["unit_annotation_score"].head()
###Output
_____no_output_____
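###Markdown
As a practical aside, each entry of `unit_annotation_score` is a dictionary keyed by the possible annotations; the sketch below assumes the keys are the `yes` and `no` strings from our `annotation_vector`, and uses the `yes` score to rank the video fragments most likely to contain a person.
###Code
# Pull the "yes" score into its own column and rank fragments by it
units = results["units"].copy()
units["yes_score"] = units["unit_annotation_score"].apply(lambda scores: scores["yes"])
units.sort_values("yes_score", ascending=False).head()
###Output
_____no_output_____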
###Markdown
The **worker metrics** are stored in `results["workers"]`:
###Code
results["workers"].head()
###Output
_____no_output_____
###Markdown
The `wqs` column in `results["workers"]` contains the **worker quality scores**, capturing the overall agreement between one worker and all the other workers.
###Code
plt.hist(results["workers"]["wqs"])
plt.xlabel("Worker Quality Score")
plt.ylabel("Workers")
###Output
_____no_output_____
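###Markdown
As a hedged example of how these scores can be used downstream, workers with a low `wqs` can be flagged (and, if desired, filtered out before re-aggregating the crowd answers); the 0.2 threshold below is purely illustrative.
###Code
# Flag workers whose quality score falls below an illustrative threshold
low_quality_workers = results["workers"][results["workers"]["wqs"] < 0.2]
print("Workers below the 0.2 threshold:", len(low_quality_workers))
low_quality_workers.sort_values("wqs").head()
###Output
_____no_output_____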
###Markdown
The **annotation metrics** are stored in `results["annotations"]`. The `aqs` column contains the **annotation quality scores**, capturing the overall worker agreement over one relation.
###Code
results["annotations"]
###Output
_____no_output_____ |
Db2_11.5_Features/Db2_11.5_JSON_05_Inserting_JSON_Data.ipynb | ###Markdown
Storing JSON Documents in Db2Updated: 2019-09-14 Load Db2 Extensions and Connect to the DatabaseThe `connection` notebook contains the `CONNECT` statement which allows access to the `SAMPLE` database. If you need to modify the connection information, edit the `connection.ipynb` notebook.
###Code
%run ../db2.ipynb
%run ../connection.ipynb
###Output
_____no_output_____
###Markdown
Inserting and Retrieving JSON DocumentsInserting a JSON value into a Db2 table can be done through a variety of methods including `LOAD`. In the previous section, the Db2 `IMPORT` command was used to move character JSON data into a table. If the Db2 column has been defined as a character field, you can use the `INSERT` statement without any additional modification.
###Code
%%sql -q
DROP TABLE CUSTOMERS;
CREATE TABLE CUSTOMERS
(
CUSTOMER_ID INT,
CUSTOMER_INFO VARCHAR(2000)
);
INSERT INTO CUSTOMERS VALUES
(
1,
'{"customerid": 100001,
"identity":
{
"firstname": "Kelly",
"lastname" : "Gilmore",
"birthdate": "1973-08-25"
}
}'
);
###Output
_____no_output_____
###Markdown
JSON_TO_BSON and BSON_TO_JSONIf you decide to store the data in binary format, you must use the `JSON_TO_BSON` function to convert the JSON into the proper format. You also have the option of using an external BSON library to convert the string and insert the value directly into the column (i.e. Db2 is not involved in the conversion).
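As a rough sketch of that client-side option (this assumes the `bson` package that ships with pymongo and is not a Db2 feature), a Python dictionary can be encoded to BSON bytes before being bound to a VARBINARY parameter with whatever database driver you use.
###Code
# Client-side conversion sketch using the pymongo "bson" package (an assumption,
# not part of Db2): produce BSON bytes that could be bound to a VARBINARY column.
import bson

customer = {
    "customerid": 100001,
    "identity": {
        "firstname": "Kelly",
        "lastname": "Gilmore",
        "birthdate": "1973-08-25"
    }
}
bson_bytes = bson.BSON.encode(customer)
print(len(bson_bytes), "bytes of BSON")
###Output
_____no_output_____
###Markdown
The rest of this section uses Db2's built-in `JSON_TO_BSON` function instead.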
###Code
%%sql -q
DROP TABLE CUSTOMERS;
CREATE TABLE CUSTOMERS
(
CUSTOMER_ID INT,
CUSTOMER_INFO VARBINARY(2000)
);
INSERT INTO CUSTOMERS VALUES
(
1,
JSON_TO_BSON('{"customerid": 100001,
"identity":
{
"firstname": "Kelly",
"lastname" : "Gilmore",
"birthdate": "1973-08-25"
}
}')
);
###Output
_____no_output_____
###Markdown
To retrieve an entire JSON document from a character field, you can use a standard `SELECT` statement. If the field is in BSON format, you must use the `BSON_TO_JSON` function to have it converted back into a readable format.
###Code
%sql -j SELECT BSON_TO_JSON(CUSTOMER_INFO) FROM CUSTOMERS
###Output
_____no_output_____
###Markdown
Retrieving the data requires the use of the `BSON_TO_JSON` function to convert it back to a text format. Invalid JSON DetectionOne of the advantages of using the new Db2 JSON functions is that you can store the data as either character (JSON) strings, or as binary (BSON) data. However, if you insert a document as a JSON character string, no checking will be done against the validity of the document until you attempt to use a JSON function against it. The following example attempts to retrieve the name field from a JSON document:
###Code
%sql VALUES JSON_VALUE('{"name": George}','$.name')
###Output
_____no_output_____
###Markdown
From a JSON format perspective, this should fail as the value `George` is not quoted and is also not a valid number. Surprisingly, the result of the above statement will be the `NULL` value, which will seem wrong at first until you realize that the default error handling clause for any ISO JSON statement is to return a `null`. If a document needs to be checked for validity during insert, then the `JSON_TO_BSON` function can be used. The following example uses the `VALUES` clause to generate an error on an invalid JSON document.
###Code
%sql VALUES JSON_TO_BSON('{"name": George}');
###Output
_____no_output_____
###Markdown
The Db2 `JSON_TO_BSON` function will check the structure of the JSON document to ensure it is in the proper format. You can write a simple function that can be used to check whether or not a character string is valid JSON:
###Code
%%sql -d
CREATE OR REPLACE FUNCTION CHECK_JSON(JSON CLOB)
RETURNS INTEGER
CONTAINS SQL LANGUAGE SQL
DETERMINISTIC
NO EXTERNAL ACTION
BEGIN
DECLARE RC BOOLEAN;
DECLARE EXIT HANDLER FOR SQLEXCEPTION RETURN(FALSE);
SET RC = JSON_EXISTS(JSON,'$' ERROR ON ERROR);
RETURN(TRUE);
END
###Output
_____no_output_____
###Markdown
The SQL to check the previous string would look like this:
###Code
%%sql
VALUES
CASE CHECK_JSON('{"name": George}')
WHEN FALSE THEN 'Bad JSON'
WHEN TRUE THEN 'Okay JSON'
END;
###Output
_____no_output_____
###Markdown
The function can be incorporated into a table definition as part of a check constraint.
###Code
%%sql -q
DROP TABLE TESTJSON;
CREATE TABLE TESTJSON
(
JSON_IN VARCHAR(1000) CONSTRAINT CHECK_JSON CHECK(CHECK_JSON(JSON_IN))
);
###Output
_____no_output_____
###Markdown
Attempting to insert an invalid JSON document would result in the following error message being returned:
###Code
%sql INSERT INTO TESTJSON VALUES '{"name": George}';
###Output
_____no_output_____ |
Capstone_Modelling.ipynb | ###Markdown
Questions: 1. Try these things to understand the data in more detail2. How many articles are there for Palestine and Microsoft each?3. What is the hour of publication for each article?4. How many articles are there where the status on all social media platforms is greater than zero? How many are Microsoft and how many are Palestine?5. How are SentimentTitle and SentimentHeadline spread?6. Try to summarize the articles and headlines. (We'll discuss this in detail this week)7. For each topic (microsoft and palestine), how many news articles are present in their individual social media platform file?8. How are the data spread in the separate file for each social media platform for articles with Facebook/GooglePlus/LinkedIn values of -1?9. What meaning comes out of the Facebook/GooglePlus/LinkedIn columns with values of -1 and 0?
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('News_Final.csv')
df.shape
df.head()
###Output
_____no_output_____
###Markdown
EDA
###Code
df.isnull().sum()
df['Source'].value_counts()
###Output
_____no_output_____
###Markdown
Drop the rows containing Obama and Economy; as per the objective, we need only Microsoft and Palestine
###Code
df = df.drop(df[df.Topic == 'obama'].index)
df = df.drop(df[df.Topic == 'economy'].index)
df.head()
df.shape
df.isnull().sum()
###Output
_____no_output_____
###Markdown
Fill the Null values
###Code
df.Source.fillna(df.Source.mode()[0],inplace = True) #Replace the null values of the Source with the mode
df.isnull().sum()
df.info()
df.describe().T
df['Topic'].value_counts()
max_sen_title = df.groupby(['Title','SentimentTitle'], as_index=False).max()
###Output
_____no_output_____
###Markdown
High to Low SentimentTitle score based on Source
###Code
df[['Source','SentimentTitle']].groupby('Source').agg('median').sort_values(by='SentimentTitle',ascending=False).head()
df[['Topic','SentimentTitle','SentimentHeadline']].groupby('Topic').agg('mean').sort_values(by='SentimentTitle',ascending=False)
###Output
_____no_output_____
###Markdown
Convert Published Date to Datetime
###Code
df['Date'] = pd.to_datetime(df['PublishDate'])
df['Date'].min(),df['Date'].max()
df.head()
df['month'] = df['Date'].dt.month
df['day'] = df['Date'].dt.day
df['year'] = df['Date'].dt.year
df['month_name'] = df['Date'].dt.strftime('%b')
df['weekday'] = df['Date'].dt.strftime('%A')
df['D_T_Y'] = df.Date.map(lambda x: x.strftime('%Y-%m-%d'))
df.head()
df.shape
df.day
###Output
_____no_output_____
###Markdown
Monday has the highest number of news articles published
###Code
df.weekday.value_counts()
###Output
_____no_output_____
###Markdown
March has the highest number of news articles published; note that it occurs twice in the data, once in 2015 and once in 2016.
###Code
df.month_name.value_counts()
###Output
_____no_output_____
###Markdown
News published based on Month and Topic
###Code
df[['month_name','Topic','IDLink']].groupby(['month_name','Topic']).agg('count').sort_values(by='IDLink',ascending=False)
df['GooglePlus'].value_counts().head()
df['LinkedIn'].value_counts().head()
df['Facebook'].value_counts().head()
###Output
_____no_output_____
###Markdown
Making a WordCloud from the Title
###Code
import nltk
stopwords = nltk.corpus.stopwords.words('english')
stopwords.extend(['Palestinian','Palestine','Microsoft'])
import nltk
from wordcloud import WordCloud
plt.figure(figsize=(12,6))
text = ' '.join(df.Title[df['Topic']=='palestine'])
wc = WordCloud(background_color='white',stopwords=stopwords).generate(text)
plt.imshow(wc)
plt.figure(figsize=(12,6))
text = ' '.join(df.Title[df['Topic']=='microsoft'])
wc = WordCloud(background_color='white',stopwords=stopwords).generate(text)
plt.imshow(wc)
###Output
_____no_output_____
###Markdown
Making the WordCloud of Headlines
###Code
plt.figure(figsize=(12,6))
Headline = df.Headline[df['Topic']=='microsoft']
values = ','.join(map(str,Headline)) #Doing this step, otherwise it is giving the error
wc = WordCloud(background_color='white',stopwords=stopwords).generate(values)
plt.imshow(wc)
plt.figure(figsize=(12,6))
Headline = df.Headline[df['Topic']=='palestine']
values = ','.join(map(str,Headline)) #Doing this step, otherwise it is giving the error
wc = WordCloud(background_color='white',stopwords=stopwords).generate(values)
plt.imshow(wc)
###Output
_____no_output_____
###Markdown
Text Cleaning
###Code
df.Headline = df.Headline.astype('str')
docs = df['Headline'].str.lower().str.replace('[^a-z@# ]','')
stopwords = nltk.corpus.stopwords.words('english')
#stopwords.extend(['amp','rt'])
stemmer = nltk.stem.PorterStemmer()
def clean_sentence(text):
words = text.split(' ')
words_clean = [stemmer.stem(w) for w in words if w not in stopwords]
return ' '.join(words_clean)
docs_clean = docs.apply(clean_sentence)
docs_clean.head()
df.dtypes
###Output
_____no_output_____
###Markdown
Document Term Matrix
###Code
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
vectorizer.fit(docs_clean)
dtm = vectorizer.transform(docs_clean)
dtm
df_dtm = pd.DataFrame(dtm.toarray(),
columns=vectorizer.get_feature_names())
df_dtm
###Output
_____no_output_____
###Markdown
Creating Bag of words analysis combined
###Code
%matplotlib inline
df_dtm.sum().sort_values(ascending=False).head(20).plot.bar(color='steelblue',figsize=(12,5))
###Output
_____no_output_____
###Markdown
Creating Bag of Words separately for Microsoft and Palestine
###Code
ndf1 = df[df['Topic']=='palestine']
ndf2 = df[df['Topic']=='microsoft']
###Output
_____no_output_____
###Markdown
Palestine
###Code
ndf1.Headline = ndf1.Headline.astype('str')
docs = ndf1['Headline'].str.lower().str.replace('[^a-z@# ]','')
stopwords = nltk.corpus.stopwords.words('english')
stopwords.extend(['palestine','palestinian'])
stemmer = nltk.stem.PorterStemmer()
def clean_sentence(text):
words = text.split(' ')
words_clean = [stemmer.stem(w) for w in words if w not in stopwords]
return ' '.join(words_clean)
docs_clean = docs.apply(clean_sentence)
docs_clean.head()
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
vectorizer.fit(docs_clean)
dtm = vectorizer.transform(docs_clean)
dtm
df_dtm = pd.DataFrame(dtm.toarray(),
columns=vectorizer.get_feature_names())
df_dtm
%matplotlib inline
df_dtm.sum().sort_values(ascending=False).head(20).plot.bar(color='steelblue',figsize=(12,5))
###Output
_____no_output_____
###Markdown
Microsoft
###Code
ndf2.Headline = ndf2.Headline.astype('str')
docs = ndf2['Headline'].str.lower().str.replace('[^a-z@# ]','')
stopwords = nltk.corpus.stopwords.words('english')
stopwords.extend(['microsoft'])
stemmer = nltk.stem.PorterStemmer()
def clean_sentence(text):
words = text.split(' ')
words_clean = [stemmer.stem(w) for w in words if w not in stopwords]
return ' '.join(words_clean)
docs_clean = docs.apply(clean_sentence)
docs_clean.head()
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
vectorizer.fit(docs_clean)
dtm = vectorizer.transform(docs_clean)
dtm
df_dtm = pd.DataFrame(dtm.toarray(),
columns=vectorizer.get_feature_names())
df_dtm
%matplotlib inline
df_dtm.sum().sort_values(ascending=False).head(20).plot.bar(color='steelblue',figsize=(12,5))
###Output
_____no_output_____
###Markdown
Ques1. How many articles are there for Palestine and Microsoft each?
###Code
df.Topic.value_counts()
###Output
_____no_output_____
###Markdown
Ques2. What is the hour of publication of each article?
###Code
df['Date'] = pd.to_datetime(df['PublishDate'])
def hr_func(ts):
return ts.hour
df['time_hour'] = df['Date'].apply(hr_func)
df.head()
###Output
_____no_output_____
###Markdown
Ques3. How many articles are there where the status on all social media platforms is greater than zero? How many are Microsoft and how many are Palestine?
###Code
GT0 = df.loc[((df.Facebook>0) & (df.LinkedIn>0) & (df.GooglePlus>0)),:]
GT0
GT0P = GT0[GT0['Topic']=='palestine']
GT0P.Facebook.value_counts().sum()
GT0M = GT0[GT0['Topic']=='microsoft']
GT0M.Facebook.value_counts().sum()
###Output
_____no_output_____
###Markdown
For Individual Condition
###Code
GT0f = df.loc[(df.Facebook>0),:]
GT0f.Facebook.value_counts().sum()
GT0g = df.loc[(df.GooglePlus>0),:]
GT0g.GooglePlus.value_counts().sum()
GT0l = df.loc[(df.LinkedIn>0),:]
GT0l.LinkedIn.value_counts().sum()
###Output
_____no_output_____
###Markdown
Microsoft has 7084 rows and Palestine has 736 rows. Ques4. How are SentimentTitle and SentimentHeadline spread?
###Code
df.hist(column='SentimentTitle')
df.hist(column='SentimentHeadline')
###Output
_____no_output_____
###Markdown
New Dataset (Required Dates)
###Code
df_new = df[(df['D_T_Y'] > '2015-11-01') & (df['D_T_Y'] < '2016-07-07')]
df_new.shape
df.shape
###Output
_____no_output_____
###Markdown
Topic Modelling
###Code
import gensim
import nltk
df_new.head()
###Output
_____no_output_____
###Markdown
Topic Modelling for Palestine
###Code
data=df_new[df_new['Topic']=='palestine']
docs=data['Title'].fillna('').str.lower()
docs=docs.str.replace('[^a-z ]','')
docs.head()
stopwords=nltk.corpus.stopwords.words('english')
stopwords.extend(['use','','will','one','good'])
stemmer=nltk.stem.PorterStemmer()
docs_clean=[]
for doc in docs:
words=doc.split(' ')
words_clean= [stemmer.stem(word) for word in words if word not in stopwords]
words_clean=[word for word in words_clean if word not in stopwords]
docs_clean.append(words_clean)
dictionary = gensim.corpora.Dictionary(docs_clean)
# bag of words
docs_bow=[]
for doc in docs_clean:
bow=dictionary.doc2bow(doc)
docs_bow.append(bow)
lda_model=gensim.models.LdaMulticore(docs_bow,id2word=dictionary,num_topics=10,random_state=500)
###Output
_____no_output_____
###Markdown
Document to Term Relationship
###Code
lda_model.get_document_topics(docs_bow[1])
new_df=pd.DataFrame(lda_model.get_document_topics(docs_bow[1]),columns=['topics','probs'])
new_df.sort_values(by='probs').iloc[-1]['topics']
new_df.sort_values(by='probs')
topics=[]
for doc in docs_bow:
new_df=pd.DataFrame(lda_model.get_document_topics(doc),columns=['topics','probs'])
topic=new_df.sort_values(by='probs').iloc[-1]['topics']
topics.append(topic)
lda_model.print_topics()
# coherence
from gensim.models.coherencemodel import CoherenceModel
c_scores=[]
for i in range(4,20):
lda_model=gensim.models.LdaMulticore(docs_bow,id2word=dictionary,num_topics=i,random_state=100,iterations=300)
coher_model=CoherenceModel(lda_model,corpus=docs_bow,coherence='u_mass')
score=coher_model.get_coherence()
c_scores.append(score)
plt.plot(c_scores)
plt.show()
###Output
_____no_output_____
###Markdown
Topic Modelling for Microsoft
###Code
data=df_new[df_new['Topic']=='microsoft']
docs=data['Title'].fillna('').str.lower()
docs=docs.str.replace('[^a-z ]','')
docs.head()
stopwords=nltk.corpus.stopwords.words('english')
stopwords.extend(['use','','will','one','good'])
stemmer=nltk.stem.PorterStemmer()
docs_clean=[]
for doc in docs:
words=doc.split(' ')
words_clean= [stemmer.stem(word) for word in words if word not in stopwords]
words_clean=[word for word in words_clean if word not in stopwords]
docs_clean.append(words_clean)
dictionary = gensim.corpora.Dictionary(docs_clean)
# bag of words
docs_bow=[]
for doc in docs_clean:
bow=dictionary.doc2bow(doc)
docs_bow.append(bow)
lda_model=gensim.models.LdaMulticore(docs_bow,id2word=dictionary,num_topics=10,random_state=500)
new_df=pd.DataFrame(lda_model.get_document_topics(docs_bow[1]),columns=['topics','probs'])
new_df.sort_values(by='probs').iloc[-1]['topics']
topics=[]
for doc in docs_bow:
new_df=pd.DataFrame(lda_model.get_document_topics(doc),columns=['topics','probs'])
topic=new_df.sort_values(by='probs').iloc[-1]['topics']
topics.append(topic)
#data['topics']=topics
lda_model.print_topics()
# coherence
from gensim.models.coherencemodel import CoherenceModel
c_scores=[]
for i in range(4,20):
lda_model=gensim.models.LdaMulticore(docs_bow,id2word=dictionary,num_topics=i,random_state=100,iterations=300)
coher_model=CoherenceModel(lda_model,corpus=docs_bow,coherence='u_mass')
score=coher_model.get_coherence()
c_scores.append(score)
plt.plot(c_scores)
plt.show()
###Output
_____no_output_____
###Markdown
Regression
###Code
GT0 = df.loc[((df.Facebook>0) & (df.LinkedIn>0) & (df.GooglePlus>0)),:]
GT0.head()
GT0.columns
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
# Exclude the target (SentimentHeadline) from the feature set to avoid leakage
X = GT0[['SentimentTitle','Facebook','GooglePlus','LinkedIn']]
y = GT0[['SentimentHeadline']]
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=2)
model = LinearRegression()
model.fit(x_train, y_train)
ypred = model.predict(x_test)
from sklearn.metrics import r2_score
r2_score(y_test, ypred)
x_train, x_test, y_train, y_test = train_test_split(X,y,test_size=0.3,random_state=2)
model=LinearRegression()
model.fit(x_train,y_train)
model.score(x_test,y_test)
###Output
_____no_output_____
###Markdown
Using the Document-Term Matrix (DTM)
###Code
GT0.Topic.value_counts()
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer
stemmer = PorterStemmer()
custom_stop_words = ['microsoft','palestine']
common_stop_words = nltk.corpus.stopwords.words('english')
stop_words_all = np.hstack([custom_stop_words, common_stop_words])
len(stop_words_all)
docs = GT0['Headline']
docs = docs.str.lower()
docs = docs.str.replace('[^a-z#@ ]', '')
docs = docs.str.split(' ')
words_rows = docs.tolist()
words_all = []
words_rows_clean = []
docs_clean = []
for row in words_rows:
row_words = [stemmer.stem(word) for word in row if word not in stop_words_all]
words_rows_clean.append(row_words)
docs_clean.append(' '.join(row_words))
words_all.extend(row_words)
model_dtm = CountVectorizer()
sparse_matrix = model_dtm.fit_transform(docs_clean)
dtm = pd.DataFrame(sparse_matrix.toarray(),
columns=model_dtm.get_feature_names())
dtm.shape
from sklearn.model_selection import train_test_split
train_x, test_x = train_test_split(dtm, test_size=0.3, random_state=0)
train_y = GT0.iloc[train_x.index]['SentimentHeadline']
test_y = GT0.iloc[test_x.index]['SentimentHeadline']
from sklearn.linear_model import LinearRegression
model=LinearRegression()
model.fit(train_x,train_y)
ypred = model.predict(test_x)
r2_score(test_y, ypred)
###Output
_____no_output_____
###Markdown
XG-Boost
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import VotingClassifier
import xgboost as xgb
from scipy.stats import randint as sp_randint
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import train_test_split
from sklearn import model_selection
from sklearn.utils import resample
from statsmodels.api import add_constant
import statsmodels.discrete.discrete_model as smt
import seaborn as sns
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn import metrics
import xgboost as xgb
from scipy.stats import randint as sp_randint
xgr = xgb.XGBRegressor()
param = {'n_estimators': sp_randint(1, 80)}
randomCV = RandomizedSearchCV(xgr, param_distributions=param, n_iter=80)
randomCV.fit(X,y)
randomCV.best_params_
xgr = xgb.XGBRegressor(n_estimators=78)
xgr.fit(x_train, y_train)
xgr.score(x_test,y_test)
###Output
_____no_output_____
###Markdown
TF-IDF
###Code
GT0.head()
GT0.Topic.value_counts()
GT0 = df[(df.Facebook>0)]
GT0=GT0[GT0['Topic']=='microsoft']
docs = GT0['Headline'].str.lower().str.replace('[^a-z ]','')
stopwords = nltk.corpus.stopwords.words('english')
stopwords.extend(['palestine','microsoft'])
stemmer = nltk.stem.PorterStemmer()
def clean_sentence(text):
words = text.split(' ')
words_clean = [stemmer.stem(w) for w in words if w not in stopwords]
return ' '.join(words_clean)
docs_clean = docs.apply(clean_sentence)
GT0.shape
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
tf_idf_vectorizer = CountVectorizer(stop_words=stopwords, ngram_range=(1,1))
document_term_matrix = tf_idf_vectorizer.fit_transform(docs_clean)
#len(tf_idf_vectorizer.get_feature_names())
document_term_matrix.toarray().shape
#pd.DataFrame(document_term_matrix.toarray(),
#columns = tf_idf_vectorizer.get_feature_names(), )
temp = pd.DataFrame(document_term_matrix.toarray(),columns=tf_idf_vectorizer.get_feature_names())
GT0['Avg-TFIDF'] = temp.mean(axis=1)
GT0['Sum-TFIDF'] = temp.sum(axis=1)
#GT0.head()
#columns_list = ['SentimentTitle','SentimentHeadline','month','day','year','time_hour','Avg-TFIDF','Sum-TFIDF']
X = temp
y = GT0['Facebook']
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=100)
X.head()
X.shape
GT0.iloc[X[X['zuckerberg']==1].index]  # positional indexing, since X was built row-aligned with GT0
X.shape
y.shape
from sklearn.linear_model import LinearRegression
model=LinearRegression()
model.fit(X_train,y_train)
r2_score(y_train, model.predict(X_train))
#GT0 = GT0[GT0['Facebook']<=8000]
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
GT0['Source_Encoded'] = le.fit_transform(GT0['Source'])
GT0.Source.value_counts()
#GT0 = GT0[GT0.Source=='WinBeta']
#GT0.shape
GT0.head()
GT0.isnull().sum()
GT0.fillna(0,inplace=True)
columns_list = ['SentimentTitle','SentimentHeadline','month','day','year','time_hour','Avg-TFIDF','Sum-TFIDF']
X = GT0[columns_list]
y = GT0['Facebook']
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=100)
from sklearn.linear_model import LinearRegression
model=LinearRegression()
model.fit(X_train,y_train)
r2_score(y_train, model.predict(X_train))
X.head()
GT0 = df[(df.GooglePlus>0)]
GT0.shape
columns_list = ['SentimentTitle','SentimentHeadline','month','day','year','time_hour','Avg-TFIDF','Sum-TFIDF']
X = GT0[columns_list]
y = GT0['GooglePlus']
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=100)
from sklearn.linear_model import LinearRegression
model=LinearRegression()
model.fit(X_train,y_train)
r2_score(y_train, model.predict(X_train))
GT0 = df[((df.Facebook>0) & (df.Facebook<8000))]
docs = GT0['Title'].str.lower().str.replace('[^a-z ]','')
stopwords = nltk.corpus.stopwords.words('english')
stopwords.extend(['palestine','microsoft'])
stemmer = nltk.stem.PorterStemmer()
def clean_sentence(text):
words = text.split(' ')
words_clean = [stemmer.stem(w) for w in words if w not in stopwords]
return ' '.join(words_clean)
docs_clean = docs.apply(clean_sentence)
from sklearn.model_selection import train_test_split
train, test = train_test_split(docs_clean,test_size=0.2,random_state=100)
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
vectorizer.fit(train)
dtm_train = vectorizer.transform(train)
dtm_test = vectorizer.transform(test)
features = vectorizer.get_feature_names()
df_dtm_train = pd.DataFrame(dtm_train.toarray(),columns=features)
df_dtm_test = pd.DataFrame(dtm_test.toarray(),columns=features)
df_dtm_train.shape, df_dtm_test.shape
#train_y = GT0.loc[train.index]['Facebook']
#test_y = GT0.loc[test.index]['Facebook']
docs_clean.head()
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer()
vectorizer.fit(train)
dtm_train = vectorizer.transform(train)
dtm_test = vectorizer.transform(test)
features = vectorizer.get_feature_names()
df_dtm_train = pd.DataFrame(dtm_train.toarray(),columns=features)
df_dtm_test = pd.DataFrame(dtm_test.toarray(),columns=features)
df_dtm_train.shape, df_dtm_test.shape
train_y = GT0.loc[train.index]['Facebook']
test_y = GT0.loc[test.index]['Facebook']
df_dtm_train.head()
from sklearn.linear_model import LinearRegression
model=LinearRegression()
model.fit(df_dtm_train,train_y)
ypred = model.predict(df_dtm_test)
r2_score(list(test_y), ypred)
ypred.shape
metrics.r2_score(test_y,ypred)
from sklearn import metrics
from sklearn.metrics import mean_squared_error
mean_squared_error(test_y, ypred)
docs = GT0['Headline'].str.lower().str.replace('[^a-z ]', '', regex=True)
stopwords = nltk.corpus.stopwords.words('english')
stopwords.extend(['palestine','microsoft'])
stemmer = nltk.stem.PorterStemmer()
def clean_sentence(text):
words = text.split(' ')
words_clean = [stemmer.stem(w) for w in words if w not in stopwords]
return ' '.join(words_clean)
docs_clean = docs.apply(clean_sentence)
from sklearn.model_selection import train_test_split
train, test = train_test_split(docs_clean,test_size=0.2,random_state=100)
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
vectorizer.fit(train)
dtm_train = vectorizer.transform(train)
dtm_test = vectorizer.transform(test)
features = vectorizer.get_feature_names()
df_dtm_train = pd.DataFrame(dtm_train.toarray(),columns=features)
df_dtm_test = pd.DataFrame(dtm_test.toarray(),columns=features)
df_dtm_train.shape, df_dtm_test.shape
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer()
vectorizer.fit(train)
dtm_train = vectorizer.transform(train)
dtm_test = vectorizer.transform(test)
features = vectorizer.get_feature_names()
df_dtm_train = pd.DataFrame(dtm_train.toarray(),columns=features)
df_dtm_test = pd.DataFrame(dtm_test.toarray(),columns=features)
df_dtm_train.shape, df_dtm_test.shape
train_y = GT0.loc[train.index]['Facebook']
test_y = GT0.loc[test.index]['Facebook']
from sklearn.linear_model import LinearRegression
model=LinearRegression()
model.fit(df_dtm_train,train_y)
ypred = model.predict(df_dtm_test)
r2_score(test_y, ypred)
mean_squared_error(test_y, ypred)
###Output
_____no_output_____ |
src/ipython/faster_rcnn/faster_rcnn_demo.ipynb | ###Markdown
You'll need to download the pretrained models from [Google Drive](https://drive.google.com/open?id=1cQ27LIn-Rig4-Uayzy_gH5-cW-NRGVzY) 1. model converted from chainer
###Code
# on this machine cupy isn't installed correctly,
# so inference is a little slow
trainer.load('/home/cy/chainer_best_model_converted_to_pytorch_0.7053.pth')
opt.caffe_pretrain=True # this model was trained from caffe-pretrained model
_bboxes, _labels, _scores = trainer.faster_rcnn.predict(img,visualize=True)
vis_bbox(at.tonumpy(img[0]),
at.tonumpy(_bboxes[0]),
at.tonumpy(_labels[0]).reshape(-1),
at.tonumpy(_scores[0]).reshape(-1))
# it fails to find the dog, but if you lower the score threshold from 0.7 to 0.6, it will be detected
###Output
/usr/local/lib/python3.5/dist-packages/chainer/cuda.py:84: UserWarning: cuDNN is not enabled.
Please reinstall CuPy after you install cudnn
(see https://docs-cupy.chainer.org/en/stable/install.html#install-cupy-with-cudnn-and-nccl).
'cuDNN is not enabled.\n'
###Markdown
2. model trained with torchvision pretrained model
###Code
trainer.load('/home/cy/fasterrcnn_12211511_0.701052458187_torchvision_pretrain.pth')
opt.caffe_pretrain=False # this model was trained from torchvision-pretrained model
_bboxes, _labels, _scores = trainer.faster_rcnn.predict(img,visualize=True)
vis_bbox(at.tonumpy(img[0]),
at.tonumpy(_bboxes[0]),
at.tonumpy(_labels[0]).reshape(-1),
at.tonumpy(_scores[0]).reshape(-1))
# it fails to find the dog, but if you lower the score threshold from 0.7 to 0.6, it will be detected
###Output
_____no_output_____
###Markdown
3. model trained with caffe pretrained model
###Code
trainer.load('/home/cy/fasterrcnn_12222105_0.712649824453_caffe_pretrain.pth')
opt.caffe_pretrain=True # this model was trained from caffe-pretrained model
_bboxes, _labels, _scores = trainer.faster_rcnn.predict(img,visualize=True)
vis_bbox(at.tonumpy(img[0]),
at.tonumpy(_bboxes[0]),
at.tonumpy(_labels[0]).reshape(-1),
at.tonumpy(_scores[0]).reshape(-1))
plt.savefig('faster_rcnn_demo.png')  # output filename is an illustrative placeholder; change as needed
###Output
_____no_output_____ |
Udemy_Practise_Problem./02_E-Commerce Yearly amount spent prediction using Linear Regression..ipynb | ###Markdown
___ ___ Linear Regression ProjectCongratulations! You just got some contract work with an Ecommerce company based in New York City that sells clothing online but they also have in-store style and clothing advice sessions. Customers come in to the store, have sessions/meetings with a personal stylist, then they can go home and order either on a mobile app or website for the clothes they want.The company is trying to decide whether to focus their efforts on their mobile app experience or their website. They've hired you on contract to help them figure it out! Let's get started!Just follow the steps below to analyze the customer data (it's fake, don't worry I didn't give you real credit card numbers or emails). Imports** Import pandas, numpy, matplotlib,and seaborn. Then set %matplotlib inline (You'll import sklearn as you need it.)**
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
Get the DataWe'll work with the Ecommerce Customers csv file from the company. It has Customer info, suchas Email, Address, and their color Avatar. Then it also has numerical value columns:* Avg. Session Length: Average session of in-store style advice sessions.* Time on App: Average time spent on App in minutes* Time on Website: Average time spent on Website in minutes* Length of Membership: How many years the customer has been a member. ** Read in the Ecommerce Customers csv file as a DataFrame called customers.**
###Code
customers=pd.read_csv('Ecommerce Customers')
###Output
_____no_output_____
###Markdown
**Check the head of customers, and check out its info() and describe() methods.**
###Code
customers.head()
customers.describe()
customers.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 500 entries, 0 to 499
Data columns (total 8 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Email 500 non-null object
1 Address 500 non-null object
2 Avatar 500 non-null object
3 Avg. Session Length 500 non-null float64
4 Time on App 500 non-null float64
5 Time on Website 500 non-null float64
6 Length of Membership 500 non-null float64
7 Yearly Amount Spent 500 non-null float64
dtypes: float64(5), object(3)
memory usage: 31.4+ KB
###Markdown
Exploratory Data Analysis**Let's explore the data!**For the rest of the exercise we'll only be using the numerical data of the csv file.___**Use seaborn to create a jointplot to compare the Time on Website and Yearly Amount Spent columns. Does the correlation make sense?**
###Code
sns.set_style('whitegrid')
sns.jointplot(x='Time on Website', y='Yearly Amount Spent', data=customers)
###Output
_____no_output_____
###Markdown
** Do the same but with the Time on App column instead. **
###Code
sns.jointplot(x='Time on App', y='Yearly Amount Spent', data=customers)
###Output
_____no_output_____
###Markdown
** Use jointplot to create a 2D hex bin plot comparing Time on App and Length of Membership.**
###Code
sns.jointplot(x='Time on App', y='Length of Membership', data=customers, kind='hex')
###Output
_____no_output_____
###Markdown
**Let's explore these types of relationships across the entire data set. Use [pairplot](https://stanford.edu/~mwaskom/software/seaborn/tutorial/axis_grids.htmlplotting-pairwise-relationships-with-pairgrid-and-pairplot) to recreate the plot below.(Don't worry about the the colors)**
###Code
sns.pairplot(customers)
###Output
_____no_output_____
###Markdown
**Based on this plot, what looks to be the most correlated feature with Yearly Amount Spent?** Length of Membership is the most correlated feature with Yearly Amount Spent. **Create a linear model plot (using seaborn's lmplot) of Yearly Amount Spent vs. Length of Membership. **
###Code
sns.lmplot(x='Yearly Amount Spent', y='Length of Membership', data=customers)
###Output
_____no_output_____
###Markdown
Training and Testing DataNow that we've explored the data a bit, let's go ahead and split the data into training and testing sets.** Set a variable X equal to the numerical features of the customers and a variable y equal to the "Yearly Amount Spent" column. **
###Code
X=customers[['Avg. Session Length', 'Time on App', 'Time on Website', 'Length of Membership']]
y=customers['Yearly Amount Spent']
###Output
_____no_output_____
###Markdown
** Use model_selection.train_test_split from sklearn to split the data into training and testing sets. Set test_size=0.3 and random_state=101**
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test=train_test_split(X, y, test_size=0.3, random_state=101)
###Output
_____no_output_____
###Markdown
Training the ModelNow its time to train our model on our training data!** Import LinearRegression from sklearn.linear_model **
###Code
from sklearn.linear_model import LinearRegression
import statsmodels.api as sm
###Output
_____no_output_____
###Markdown
**Create an instance of a LinearRegression() model named lm.**
###Code
lm = LinearRegression()
###Output
_____no_output_____
###Markdown
** Train/fit lm on the training data.**
###Code
lm.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
**Print out the coefficients of the model**
###Code
print('Coefficients:')
print(lm.coef_)
###Output
Coefficients:
[25.98154972 38.59015875 0.19040528 61.27909654]
###Markdown
Predicting Test DataNow that we have fit our model, let's evaluate its performance by predicting off the test values!** Use lm.predict() to predict off the X_test set of the data.**
###Code
pre=lm.predict(X_test)
###Output
_____no_output_____
###Markdown
** Create a scatterplot of the real test values versus the predicted values. **
###Code
plt.scatter(pre, y_test)
plt.xlabel('Predicted Y')
plt.ylabel('Y-Test (true values)');
###Output
_____no_output_____
###Markdown
Evaluating the ModelLet's evaluate our model performance by calculating the residual sum of squares and the explained variance score (R^2).** Calculate the Mean Absolute Error, Mean Squared Error, and the Root Mean Squared Error. Refer to the lecture or to Wikipedia for the formulas**
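For reference, with $y_i$ the observed test values, $\hat{y}_i$ the model predictions, and $n$ the number of test samples: $\text{MAE} = \frac{1}{n}\sum_{i=1}^{n}|y_i - \hat{y}_i|$, $\text{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2$, and $\text{RMSE} = \sqrt{\text{MSE}}$.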
###Code
from sklearn import metrics
print('MAE:', metrics.mean_absolute_error(y_test,pre))
print('MSE:', metrics.mean_squared_error(y_test,pre))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test,pre)))
print('R squared:',metrics.r2_score(y_test,pre))
from statsmodels.stats.outliers_influence import variance_inflation_factor
from scipy.stats import linregress
# VIF is computed on the predictors without an intercept column; drop 'const'
# only if it is already present (e.g. when re-running this cell after add_constant below)
X = X.drop(columns='const', errors='ignore')
vif_data = pd.DataFrame()
vif_data['feature'] = X.columns
vif_data['vif'] = [variance_inflation_factor(X.values, i) for i in range(len(X.columns))]
vif_data
X = sm.add_constant(X)
reg_ols = sm.OLS(y, X).fit()
reg_ols.summary()
###Output
_____no_output_____
###Markdown
ResidualsYou should have gotten a very good model with a good fit. Let's quickly explore the residuals to make sure everything was okay with our data. **Plot a histogram of the residuals and make sure it looks normally distributed. Use either seaborn distplot, or just plt.hist().**
###Code
sns.distplot((y_test-pre),bins=50);
###Output
/Applications/anaconda3/lib/python3.8/site-packages/seaborn/distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
###Markdown
ConclusionWe still want to figure out the answer to the original question: do we focus our efforts on mobile app or website development? Or maybe that doesn't even really matter, and Membership Time is what is really important. Let's see if we can interpret the coefficients at all to get an idea.** Recreate the dataframe below. **
###Code
pd.DataFrame(data=lm.coef_,index=X_train.columns,columns=['Coefficient'])
###Output
_____no_output_____ |
week_3/day_1_lecture.ipynb | ###Markdown
LECTURE OVERVIEW --- By the end of the lecture, you'll be able to:- import modules/packages- use common data structures from the `collections` module- use infinite iterators from the `itertools` module- use terminating iterators from the `itertools` module- use combinatoric iterators from the `itertools` module MODULES By the end of the lecture, you'll be able to:- **import modules/packages**- use common data structures from the collections module- use infinite iterators from the itertools module- use terminating iterators from the itertools module- use combinatoric iterators from the itertools moduleWhat are modules/packages?- libraries of code- specific to tasks/functions- a lot of common functions are already written by computer scientists and are much faster than you can write- we will be using packages in addition to base Python in the next two weeks
###Code
# how to get the mean of `nums_list`?
nums_list = [1, 2, 3, 4, 5, 10, 20, 50, 200]
###Output
_____no_output_____
###Markdown
Let's google it!
###Code
import numpy
print(numpy.mean(nums_list))
import numpy as np
print(np.mean(nums_list))
from numpy import mean
print(mean(nums_list))
help(np.mean)
###Output
Help on function mean in module numpy:
mean(a, axis=None, dtype=None, out=None, keepdims=<no value>, *, where=<no value>)
Compute the arithmetic mean along the specified axis.
Returns the average of the array elements. The average is taken over
the flattened array by default, otherwise over the specified axis.
`float64` intermediate and return values are used for integer inputs.
Parameters
----------
a : array_like
Array containing numbers whose mean is desired. If `a` is not an
array, a conversion is attempted.
axis : None or int or tuple of ints, optional
Axis or axes along which the means are computed. The default is to
compute the mean of the flattened array.
.. versionadded:: 1.7.0
If this is a tuple of ints, a mean is performed over multiple axes,
instead of a single axis or all the axes as before.
dtype : data-type, optional
Type to use in computing the mean. For integer inputs, the default
is `float64`; for floating point inputs, it is the same as the
input dtype.
out : ndarray, optional
Alternate output array in which to place the result. The default
is ``None``; if provided, it must have the same shape as the
expected output, but the type will be cast if necessary.
See :ref:`ufuncs-output-type` for more details.
keepdims : bool, optional
If this is set to True, the axes which are reduced are left
in the result as dimensions with size one. With this option,
the result will broadcast correctly against the input array.
If the default value is passed, then `keepdims` will not be
passed through to the `mean` method of sub-classes of
`ndarray`, however any non-default value will be. If the
sub-class' method does not implement `keepdims` any
exceptions will be raised.
where : array_like of bool, optional
Elements to include in the mean. See `~numpy.ufunc.reduce` for details.
.. versionadded:: 1.20.0
Returns
-------
m : ndarray, see dtype parameter above
If `out=None`, returns a new array containing the mean values,
otherwise a reference to the output array is returned.
See Also
--------
average : Weighted average
std, var, nanmean, nanstd, nanvar
Notes
-----
The arithmetic mean is the sum of the elements along the axis divided
by the number of elements.
Note that for floating-point input, the mean is computed using the
same precision the input has. Depending on the input data, this can
cause the results to be inaccurate, especially for `float32` (see
example below). Specifying a higher-precision accumulator using the
`dtype` keyword can alleviate this issue.
By default, `float16` results are computed using `float32` intermediates
for extra precision.
Examples
--------
>>> a = np.array([[1, 2], [3, 4]])
>>> np.mean(a)
2.5
>>> np.mean(a, axis=0)
array([2., 3.])
>>> np.mean(a, axis=1)
array([1.5, 3.5])
In single precision, `mean` can be inaccurate:
>>> a = np.zeros((2, 512*512), dtype=np.float32)
>>> a[0, :] = 1.0
>>> a[1, :] = 0.1
>>> np.mean(a)
0.54999924
Computing the mean in float64 is more accurate:
>>> np.mean(a, dtype=np.float64)
0.55000000074505806 # may vary
Specifying a where argument:
>>> a = np.array([[5, 9, 13], [14, 10, 12], [11, 15, 19]])
>>> np.mean(a)
12.0
>>> np.mean(a, where=[[True], [False], [False]])
9.0
###Markdown
** Exercise**Google the standard deviation function from the `numpy` python package. Import the package and then use the function on `nums_list`.
###Code
# TODO: insert solution here
###Output
_____no_output_____
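###Markdown
One possible solution sketch for the exercise above (the relevant NumPy function is `np.std()`):
###Code
import numpy as np
# population standard deviation of the list; pass ddof=1 for the sample standard deviation
print(np.std(nums_list))
###Output
_____no_output_____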
###Markdown
COLLECTIONS & ITERTOOLS---The tasks these tools handle can also be solved with the tools you already know, but the new tools are much more efficient and produce cleaner code. For instance, you've learned how to count instances of unique elements in a `list` using `if` statements and `for` loops, but there's actually a much quicker way to do this using a `Counter` object from the `collections` module.Making sure your code is efficient is very important for large-scale projects.It is best practice to try to solve a problem yourself, then research whether someone else has solved it in a better way. The `collections` Module By the end of the lecture, you'll be able to:- import modules/packages- **use common data structures from the `collections` module**- use infinite iterators from the itertools module- use terminating iterators from the itertools module- use combinatoric iterators from the itertools module**Collections** in Python are containers that are used to store collections of data. For example, `list`, `dict`, `set`, `tuple` are built-in collections. The `collections` module provides additional data structures to store collections of data.We will discuss a few commonly used data structures from the Python collections module:- `Counter`- `defaultdict`- `OrderedDict`- `deque` (pronounced *deck*) The `Counter`- ```pythonCounter(mapping_or_iterable)```: returns a dictionary where a key is an element in the `mapping_or_iterable` and the value is the number of times that element exists
###Code
from collections import Counter
###Output
_____no_output_____
###Markdown
Creating a `Counter` objectThe simplest way is to call the `Counter()` function without any arguments.
###Code
cnt = Counter()
###Output
_____no_output_____
###Markdown
You can pass an iterable (e.g., a list) to the `Counter()` function to create a `Counter` object.
###Code
lst = [1, 2, 3, 4, 1, 2, 6, 7, 3, 8, 1]
Counter(lst)
###Output
_____no_output_____
###Markdown
The `Counter()` function can take a dictionary as an argument. In this dictionary, the value of a key should be the *count* of that key.
###Code
Counter({1: 3, 2: 4})
###Output
_____no_output_____
###Markdown
A `Counter` object can also be initialized with keyword arguments:
###Code
Counter(apples=4, oranges=8)
###Output
_____no_output_____
###Markdown
You can access any counter item with its key as shown below:
###Code
lst = [1, 2, 3, 4, 1, 2, 6, 7, 3, 8, 1]
cnt = Counter(lst)
cnt[1]
###Output
_____no_output_____
###Markdown
Let's take a look at a performance example:
###Code
import time
import random
import datetime
def man_count_elements(elem_lst):
elem_dict = {}
for elem in elem_lst:
if elem not in elem_dict:
elem_dict[elem] = 1
else:
elem_dict[elem] += 1
return elem_dict
def coll_count_elements(elem_lst):
from collections import Counter
return dict(Counter(elem_lst))
element_lst = [random.randrange(1, 1000, 1) for _ in range(10_000_000)]
start = time.time()
res_dict = man_count_elements(element_lst)
end = time.time()
runtime = end - start
print(f"man_count_elements() took {str(datetime.timedelta(seconds=runtime))}")
start = time.time()
res_dict = coll_count_elements(element_lst)
end = time.time()
runtime = end - start
print(f"coll_count_elements() took {str(datetime.timedelta(seconds=runtime))}")
###Output
_____no_output_____
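###Markdown
If you only need a quick relative timing, the standard library's `timeit` module avoids the manual start/end bookkeeping above. This sketch reuses the two functions defined in the previous cell on a smaller list (absolute numbers will vary by machine):
###Code
import timeit
sample_lst = [random.randrange(1, 1000, 1) for _ in range(100_000)]
manual_s = timeit.timeit(lambda: man_count_elements(sample_lst), number=10)
counter_s = timeit.timeit(lambda: coll_count_elements(sample_lst), number=10)
print(f"manual loop: {manual_s:.3f}s   Counter: {counter_s:.3f}s")
###Output
_____no_output_____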
###Markdown
`Counter` methodsSince a `Counter` object is a subclass of `dict`, it has all the methods from the `dict` class. `Counter` also has a few additional methods:1. ```pythonelements()```: returns an iterator containing counted elements
###Code
cnt = Counter(apples=3, bananas=4, cheese=0)
list(cnt.elements())
###Output
_____no_output_____
###Markdown
Notice how the count for *cheese* does not appear? If an element’s count is less than one, `elements()` will ignore it. 2. ```pythonmost_common(n=None)```: returns a list of the *n* most common elements and their counts
###Code
Counter('strawberries').most_common(3)
###Output
_____no_output_____
###Markdown
If *n* is omitted or `None`, it will return **all** elements in the counter.
###Code
Counter('strawberries').most_common()
###Output
_____no_output_____
###Markdown
3. ```pythonsubtract(mapping_or_iterable)```: elements are removed from `mapping_or_iterable`
###Code
cnt = Counter(apples=4, bananas=2, cheese=0, doughnuts=-2)
deduct = Counter(apples=1, bananas=2, cheese=3, doughnuts=4)
cnt.subtract(deduct)
cnt
###Output
_____no_output_____
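###Markdown
`Counter` objects also support `+`, `-`, `&`, and `|` as multiset operations. Unlike `subtract()`, these binary operators return a new counter and drop any counts that are zero or negative. A quick sketch:
###Code
a = Counter(apples=4, bananas=2)
b = Counter(apples=1, bananas=3)
print(a + b)  # add counts
print(a - b)  # subtract, keeping only positive counts
print(a & b)  # intersection: minimum of each count
print(a | b)  # union: maximum of each count
###Output
_____no_output_____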
###Markdown
To read more about `Counter` objects, check out the help output:
###Code
help(Counter)
###Output
_____no_output_____
###Markdown
** Exercise**I have a grocery list (i.e., `groceries`) that contains a list of food I need to buy, but before I could go to the store, my partner bought some food on the way home from work (i.e., `purchased`). I want to make sure that I don't over purchase a unit of food that has already been bought since we are on a budget. Speaking of a budget, we can only afford the **top 2 food items** on our list.Create a function that:- takes required arguments of `grocery_lst` and `purchased_lst`- takes optional arguments of `n_int=None`- utilizes `Counter()`, `subtract()`, and `most_common()`- returns a `list` of `(food, count)` pairsOnce created, pass in the correct parameters to your function to get the correct output.
###Code
groceries = ['apple', 'apple', 'apple', 'cake', 'cake', 'banana', 'chicken', 'chicken']
purchased = ['banana', 'chicken', 'apple']
# TODO: insert solution here
# >>> [('apple', 2), ('cake', 2)]
###Output
_____no_output_____
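###Markdown
One possible solution sketch for the exercise above, reusing the `Counter` import and the lists defined earlier (the function and variable names are illustrative):
###Code
def top_remaining_items(grocery_lst, purchased_lst, n_int=None):
    remaining = Counter(grocery_lst)            # everything we planned to buy
    remaining.subtract(Counter(purchased_lst))  # remove what was already purchased
    return remaining.most_common(n_int)         # top-n (or all) remaining items

top_remaining_items(groceries, purchased, n_int=2)
###Output
_____no_output_____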
###Markdown
The `defaultdict````pythondefaultdict(default_type)```- works exactly like a `dict` but it doesn't throw a `KeyError` when accessing a non-existing key- initializes the key with the element of the default value of the passed in data type called `default_type`
###Code
from collections import defaultdict
###Output
_____no_output_____
###Markdown
Creating a `defaultdict` You can create a `defaultdict` by passing a data type as an argument:
###Code
num_fruits = defaultdict(int)
num_fruits['kiwis'] = 1
num_fruits['apples'] = 2
num_fruits['oranges']
###Output
_____no_output_____
###Markdown
In a normal dictionary, trying to access `oranges` would raise a `KeyError`, but since `defaultdict` initializes new keys with the default value of 0 for `int`, we get a return value of 0.To read more about `defaultdict` objects, check out the help output:
###Code
help(defaultdict)
###Output
_____no_output_____
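###Markdown
The default factory can be any callable, not just `int`. For example, `defaultdict(list)` is a handy way to group items without first checking whether a key exists (the aisle data below is made up for illustration):
###Code
aisle_items = defaultdict(list)
for aisle, item in [("produce", "kiwis"), ("dairy", "milk"), ("produce", "apples")]:
    aisle_items[aisle].append(item)  # a missing key is initialized to an empty list
print(aisle_items)
###Output
_____no_output_____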
###Markdown
The `OrderedDict````pythonOrderedDict(items=None)```- Keys maintain the order in which they were inserted
###Code
from collections import OrderedDict
###Output
_____no_output_____
###Markdown
Creating an `OrderedDict`You can create an `OrderedDict` without passing arguments, and afterwards you can insert items into it.
###Code
od = OrderedDict()
od['rice'] = 1
od['bread'] = 2
od['burger'] = 3
od
###Output
_____no_output_____
###Markdown
Here, we create a `Counter` from a list and insert its elements into an `OrderedDict` based on their counts. The most frequently occurring item will be inserted as the first key and the least frequently occurring item will be inserted as the last key.
###Code
groceries = ["avacado", "corn", "corn", "avacado", "avacado", "beer", "avacado", "beer", "corn"]
cnt = Counter(groceries)
od = OrderedDict(cnt.most_common())
for key, val in od.items():
print(key, val)
###Output
_____no_output_____
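###Markdown
Beyond remembering insertion order, an `OrderedDict` has a `move_to_end()` method, and its `popitem()` can pop from either end. A quick sketch reusing the counter from the previous cell:
###Code
od = OrderedDict(cnt.most_common())
od.move_to_end('corn')              # send 'corn' to the back
od.move_to_end('beer', last=False)  # bring 'beer' to the front
print(list(od.items()))
print(od.popitem(last=False))       # pop from the front instead of the back
###Output
_____no_output_____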
###Markdown
To read more about `OrderedDict` objects, check out the help output:
###Code
help(OrderedDict)
###Output
_____no_output_____
###Markdown
The `deque````pythondeque(iterable)```- A `deque` (double-ended queue) is a list-like container optimized for fast appends and pops at both ends.
###Code
from collections import deque
###Output
_____no_output_____
###Markdown
Creating a `deque`To create a `deque`, pass a list into it.
###Code
groceries = ["avacado", "corn", "beer"]
grocery_deq = deque(groceries)
print(grocery_deq)
###Output
_____no_output_____
###Markdown
Inserting elementsYou can insert elements into the `deque` at either end. To add an element to the *right*, you use the `append()` method. To add an element to the *left*, you use the `appendleft()` method.
###Code
grocery_deq.append("dumplings")
grocery_deq.appendleft("eggs")
print(grocery_deq)
###Output
_____no_output_____
###Markdown
Removing elementsSimilarly to inserting, you can remove an element from the *right* end using `pop()`, and from the *left* end using `popleft()`.
###Code
grocery_deq.pop()
grocery_deq.popleft()
print(grocery_deq)
###Output
_____no_output_____
###Markdown
Clearing a `deque`To remove all the elements, you can use the `clear()` method.
###Code
groceries = ["avacado", "corn", "beer"]
grocery_deq = deque(groceries)
print(grocery_deq)
print(grocery_deq.clear())
###Output
_____no_output_____
###Markdown
Counting elementsIf you want to find the count of a specific element, use the `count(x)` method where `x` is the element you want to find.
###Code
groceries = ["fish", "ginger", "fish", "honey", "fish"]
deq = deque(groceries)
print(deq.count("fish"))
###Output
_____no_output_____
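###Markdown
Two more `deque` features worth knowing: `rotate()` shifts elements around either end, and the optional `maxlen` argument turns a deque into a fixed-size buffer that silently discards items from the opposite end. A quick sketch:
###Code
deq = deque(["fish", "ginger", "honey"])
deq.rotate(1)             # shift right: the last element wraps around to the front
print(deq)
recent = deque(maxlen=3)  # keeps only the 3 most recently appended items
for item in ["apple", "bacon", "cake", "banana"]:
    recent.append(item)
print(recent)
###Output
_____no_output_____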
###Markdown
To read more about `deque` objects, check out the help output:
###Code
help(deque)
###Output
_____no_output_____
###Markdown
** Exercise**It is a new day and that means a new grocery list, but this time it is represented as a `deque` (i.e., `groc_deq`). There are also children running around, hyped up on Capri Sun, who love to wreak havoc on deques.
###Code
groceries = ["apple", "bacon", "cake", "banana", "chicken"]
groc_deq = deque(groceries)
###Output
_____no_output_____
###Markdown
Implement the following actions using `deque` methods:- **child1** adds "cake" to the top of the list because...it's cake- **parent1** adds "beer" to the bottom of the list to relax- **child2** is currently angry with **child1** so **child2** removes **child1's** item- **child1** notices and adds 3 more "cake" to the top of the list in spite of **child2**- **parent2** thinks **parent1** should stop drinking so **parent2** removes **parent1's** item- **child2** takes away 1 of **child1's** item from the list- **parent1** removes the last 2 items in spite of **parent2**Answer the following questions about `groc_deq` after the above actions have been implemented:- What is the most common item in the deque?- What is the last item in the deque?
###Code
# TODO: insert solution here
###Output
_____no_output_____
###Markdown
To read more about the `collections` module, check out the [documentation](https://docs.python.org/3.8/library/collections.htmlmodule-collections). The `itertools` Module**Itertools** is a Python module designed to iterate over data structures that utilize computational resources efficiently. What are Iterators?An **iterator** is an object that will return data, one element at a time. Most built-in containers in Python are iterables (e.g., `list`, `tuple`, `string`, etc.). A Python iterator object must implement two special methods:1. `__iter__()`: returns the iterator object itself2. `__next__()`: returns the next element within the iterator (the built-in `iter()` and `next()` functions call these under the hood). Internal Workings of `for` LoopsA `for` loop can iterate over any iterable. The following loop```pythonfor element in iterable: do something with element```is actually implemented in Python as```python create an iterator object from that iterableiter_obj = iter(iterable) infinite loopwhile True: try: get the next item element = next(iter_obj) do something with element except StopIteration: if StopIteration is raised, break from loop break```Internally, the `for` loop creates an iterator object (i.e., `iter_obj`) by calling `iter()` on the iterable; the `for` loop is then effectively an **infinite** `while` loop. Inside the loop, it calls `next()` to get the next element and executes the body of the `for` loop with this value. After all the items have been exhausted, `StopIteration` is raised and the loop ends. Why use the `itertools` Module?The idea behind `itertools` is to deal with large amounts of data (typically sequence data sets) in a memory-efficient way. While some iterators are **infinite**, some **terminate on the shortest input sequence**, and some are **combinatoric**. Infinite Iterators By the end of the lecture, you'll be able to:- import modules/packages- use common data structures from the collections module- **use infinite iterators from the `itertools` module**- use terminating iterators from the itertools module- use combinatoric iterators from the itertools moduleInfinite iterators run indefinitely unless you include a stopping condition. We will cover the 3 infinite iterators from `itertools`.**NOTE: Since these are infinite iterators you MUST include a terminating condition!**1. ```pythoncount(start=0, step=1)```: returns a sequence of values from `start` with intervals the size of `step`
###Code
from itertools import count
###Output
_____no_output_____
###Markdown
For example:
###Code
for i in count(10, 2):
print(i)
if i > 25: break
###Output
_____no_output_____
###Markdown
Here’s `count()` with one argument:
###Code
for i in count(2):
print(i)
if i >= 10: break
###Output
_____no_output_____
###Markdown
It takes a step of 1. If we call it without an argument, it starts with 0:
###Code
for i in count():
print(i)
if i >= 5: break
###Output
_____no_output_____
###Markdown
**Caution**

If you don't have a stopping condition, you will need to stop your code by using the `Interrupt the Kernel` button (or using `Ctrl-C` within your terminal). For example:
###Code
# for i in count():
# print(i)
###Output
_____no_output_____
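Another way to bound an infinite iterator without a manual `break` is `itertools.islice()` (not covered in this lecture); a small sketch:

```python
from itertools import count, islice

# take only the first 5 values of the infinite counter
print(list(islice(count(10, 2), 5)))   # [10, 12, 14, 16, 18]
```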
###Markdown
To read more about the `count()` method, check out the help output:
###Code
help(count)
###Output
_____no_output_____
###Markdown
2. `cycle(iterable)`: makes an iterator from the elements of an `iterable`, and saves a copy of each.
###Code
from itertools import cycle
###Output
_____no_output_____
###Markdown
For example
###Code
for count, i in enumerate(cycle(['carrots', 'granola', 'kabobs'])):
print(count, i)
if count == 10: break
###Output
_____no_output_____
###Markdown
To read more about `cycle()` method, check out the help output:
###Code
help(cycle)
###Output
_____no_output_____
###Markdown
3. `repeat(element, n_times=None)`: repeats `element` up to `n_times` times (endlessly if `n_times` is not given)
###Code
from itertools import repeat
###Output
_____no_output_____
###Markdown
For example:
###Code
for i in repeat("spinach", 3):
print(i)
###Output
_____no_output_____
###Markdown
Note that, since `n_times` is optional, we can repeat endlessly:
###Code
for count, i in enumerate(repeat("yogurt")):
print(i)
if count >= 5: break
###Output
_____no_output_____
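A common use of `repeat()` is to supply a constant argument to `map()` or `zip()`; for example, a small sketch:

```python
from itertools import repeat

# pair each number in 0-4 with a repeated exponent of 2
print(list(map(pow, range(5), repeat(2))))   # [0, 1, 4, 9, 16]
```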
###Markdown
To read more about the `repeat()` method, check out the help output:
###Code
help(repeat)
###Output
_____no_output_____
###Markdown
** Exercise**

We are going on a picnic with a community of neighbors! But no one has a blanket to lay on. You know someone who is a master blanket maker, but they need to see a concept design first. You will design a 10x10 blanket with only 2 colors. The first color will repeat horizontally and the second color will follow the first color, also repeating horizontally. These two colors will repeat vertically until the correct measurements have been met.

Create a blanket-making function via `print()` statements with the following requirements:

- takes required arguments of:
    - `color_lst`: list of single-character colors (e.g., use 'r' for red)
    - `horiz_repeat`: number of times the color repeats horizontally
    - `vert_repeat`: number of times the colors repeat vertically
- verify that only 2 colors are used
- utilizes `cycle()` and `repeat()`
###Code
# TODO: insert solution here
###Output
_____no_output_____
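One possible solution sketch, under one reading of the requirements (each row is a single color repeated `horiz_repeat` times, and the pair of colors alternates vertically `vert_repeat` times):

```python
from itertools import cycle, repeat

def make_blanket(color_lst, horiz_repeat, vert_repeat):
    """Print a simple two-color blanket concept design."""
    if len(set(color_lst)) != 2:
        print("Please provide exactly 2 colors")
        return
    color_cycle = cycle(color_lst)
    # one row per color, with the pair of colors repeated vert_repeat times
    for _ in range(vert_repeat * len(color_lst)):
        row_color = next(color_cycle)
        print("".join(repeat(row_color, horiz_repeat)))

make_blanket(["r", "b"], horiz_repeat=10, vert_repeat=5)   # a 10x10 design
```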
###Markdown
Terminating iterators

By the end of the lecture, you'll be able to:

- import modules/packages
- use common data structures from the collections module
- use infinite iterators from the itertools module
- **use terminating iterators from the `itertools` module**
- use combinatoric iterators from the itertools module

Terminating iterators work on finite input sequences and stop once the (shortest) input is exhausted, producing output based on the method used. We will cover the most common iterators from the module.

1. `accumulate(iterable, func=None, ...)`: makes an iterator that returns accumulated sums (or the accumulated results of a specified binary function)
###Code
from itertools import accumulate
###Output
_____no_output_____
###Markdown
For example:
###Code
lst = [0, 1, 0, 1, 1, 2, 3, 5]
for i in accumulate(lst):
print(i)
###Output
_____no_output_____
###Markdown
This also works with strings:
###Code
for i in accumulate('spinach'):
print(i)
###Output
_____no_output_____
###Markdown
We can also pass in a binary function to `accumulate()`. Here we will use the multiplication operator from the `operator` module and pass the multiplication function (i.e., `operator.mul`) to `accumulate()`.
###Code
import operator
lst = [1, 2, 3, 4, 5]
last_acc = lst[0]
for i, acc in enumerate(accumulate(lst, operator.mul)):
print(f"{lst[i]} * {last_acc} = {acc}")
last_acc = acc
###Output
_____no_output_____
###Markdown
Here we accumulate the `max` along the iterable:
###Code
lst = [2, 1, 4, 3, 5]
last_max_acc = lst[0]
for i, acc in enumerate(accumulate(lst, max)):
print(f"max({lst[i]}, {last_max_acc}) = {acc}")
if acc > last_max_acc:
last_max_acc = acc
###Output
_____no_output_____
###Markdown
To read more about the `accumulate()` method, check out the help output:
###Code
help(accumulate)
###Output
_____no_output_____
###Markdown
2. `chain(*iterables)`: makes an iterator that returns elements from the first iterable, then proceeds to the next iterable, until all iterables are exhausted.

The `*` operator is used to unpack an iterable into the arguments in the function call.

```python
>>> fruits = ['lemon', 'pear', 'watermelon', 'tomato']
>>> print(fruits[0], fruits[1], fruits[2], fruits[3])
lemon pear watermelon tomato
>>> print(*fruits)
lemon pear watermelon tomato
```
###Code
from itertools import chain
###Output
_____no_output_____
###Markdown
For example:
###Code
for i in chain('acorn squash', 'bagels'):
print(i)
###Output
_____no_output_____
###Markdown
The `chain()` method is especially useful when you need to flatten a list of lists into a single list.
###Code
menu_items = [['asparagus', 'bison'], ['bluefish', 'beer'], ['milkshake']]
print(list(chain(*menu_items)))
###Output
_____no_output_____
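When the sublists are already gathered in a single list, `chain.from_iterable()` does the same flattening without the `*` unpacking:

```python
from itertools import chain

menu_items = [['asparagus', 'bison'], ['bluefish', 'beer'], ['milkshake']]
print(list(chain.from_iterable(menu_items)))
```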
###Markdown
To read more about the `chain()` method, check out the help output:
###Code
help(chain)
###Output
_____no_output_____
###Markdown
3. `groupby(iterable, key_func=None)`: makes an iterator that takes the `iterable` and returns consecutive keys and groups, where the sub-iterators are grouped by the key generated by `key_func`.
###Code
from itertools import groupby
###Output
_____no_output_____
###Markdown
If 'key_func' is not specified or is `None`, it defaults to an identity function and returns the element unchanged. Generally, the `iterable` needs to already be sorted on the same key function.
###Code
for key, group in groupby('AAAAABBCCCCCDDDCCCBBA'):
print({key: list(group)})
###Output
_____no_output_____
###Markdown
Let's take an example where we pass in a custom function to `groupby()`:
###Code
def meal_key(meal):
"""Assume the first element is the meal type"""
return meal[0]
meal_lst = [
("Breakfast", "eggs"),
("Breakfast", "orange juice"),
("Lunch", "sandwich"),
("Lunch", "tea"),
("Dinner", "pasta"),
("Dinner", "wine")
]
for key, group in groupby(meal_lst, key=meal_key):
print({key: list(group)})
###Output
_____no_output_____
###Markdown
To read more about the `groupby()` method, check out the help output:
###Code
help(groupby)
###Output
_____no_output_____
###Markdown
4. `starmap(function, iterable)`: makes an iterator that unpacks each element of the `iterable` as the arguments to `function` and returns the results
###Code
from itertools import starmap
###Output
_____no_output_____
###Markdown
For example, we use the subtraction operator from the `operator` module (i.e., `operator.sub`) to subtract the second element from the first element of each tuple, until the iterable is exhausted:
###Code
lst = [(2,1), (7,3), (15,10)]
for i in starmap(operator.sub, lst):
print(i)
###Output
_____no_output_____
###Markdown
To read more about the `starmap()` method, check out the help output:
###Code
help(starmap)
###Output
_____no_output_____
###Markdown
** Exercise**

A few local food shelters heard about your community picnic and have some extra food that they want to donate. For the days Monday, Tuesday, and Wednesday, the shelters can donate the same select amount of food each day. We want to quickly count the accumulated food that the shelters can donate from day to day, for each food item. The days mentioned are stored as a list in `days`, and the donated food is stored as a list of lists in `donated_food` such that each list in the `donated_food` list represents the food donated by a shelter.
###Code
days = ['Monday', 'Tuesday', 'Wednesday']
donated_food = [['sandwich', 'chips', 'sandwich'], ['sandwich', 'chicken', 'chips']]
###Output
_____no_output_____
###Markdown
I've created a function, `count_donated_food()`, that:

- takes a required argument `food_lst_lst` that is a list of lists
- flattens `food_lst_lst`
- returns a `Counter` that contains counts for each food item
###Code
def count_donated_food(food_lst_lst):
food_lst = list(chain(*food_lst_lst))
return Counter(food_lst)
###Output
_____no_output_____
###Markdown
You will create a function that:

- takes required arguments:
    - `donated_food_cnt`: `Counter` that contains counts for each food item
    - `days_lst`: list of days
- for each food item, uses `print()` to show a list of accumulated (day, count) pairs
    - e.g., sandwich [('Monday', 3), ('Tuesday', 6), ('Wednesday', 9)]
- utilizes `accumulate()`
###Code
# TODO: insert solution here
# >>> sandwich [('Monday', 3), ('Tuesday', 6), ('Wednesday', 9)]
# ... chips [('Monday', 2), ('Tuesday', 4), ('Wednesday', 6)]
# ... chicken [('Monday', 1), ('Tuesday', 2), ('Wednesday', 3)]
###Output
_____no_output_____
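One possible solution sketch (it follows the hint in the next cell and reuses `count_donated_food()`, `days`, and `donated_food` defined above):

```python
from itertools import accumulate, repeat

def show_accumulated_donations(donated_food_cnt, days_lst):
    """Print the accumulated (day, count) pairs for each donated food item."""
    for item, count in donated_food_cnt.items():
        day_counts_lst = list(repeat(count, len(days_lst)))
        acc_lst = []
        for index, accumulator in enumerate(accumulate(day_counts_lst)):
            acc_lst.append((days_lst[index], accumulator))
        print(item, acc_lst)

show_accumulated_donations(count_donated_food(donated_food), days)
```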
###Markdown
HINT: if you are having trouble, here is some pseudo code to guide you (please try to figure it out yourself first)

- for each food `item` and `count` in `donated_food_cnt`:
    - let `day_counts_lst` be a list where the food item count repeats for the number of days
    - assign an empty list, `acc_lst`, for saving your results
    - for each `index` and `accumulator` value from enumerating accumulating `day_counts_lst`:
        - append the tupled result of `days_lst[index]` and `accumulator` to `acc_lst`
    - print `item` and `acc_lst`

Combinatoric iterators

By the end of the lecture, you'll be able to:

- import modules/packages
- use common data structures from the collections module
- use infinite iterators from the itertools module
- use terminating iterators from the itertools module
- **use combinatoric iterators from the `itertools` module**

Combinatoric iterators deal with arranging, operating on, and selecting combinatorial elements from discrete sets.

The [**cartesian product**](https://en.wikipedia.org/wiki/Cartesian_product) of two sets, `A` and `B`, is the set of all ordered pairs, denoted as `AxB`.

![](day_1_assets/Cartesian_Product_qtl1.svg)

1. `product(*iterables, repeat=1)`: returns the cartesian product of the input iterables
###Code
from itertools import product
###Output
_____no_output_____
###Markdown
For example:
###Code
alph_lst = ['A', 'B', 'C']
for i in product(alph_lst, alph_lst):
print(i)
###Output
_____no_output_____
###Markdown
If we pass `repeat=2`, the rightmost element advances with every iteration:
###Code
alph_lst = ['A', 'B', 'C']
for i in product(alph_lst, alph_lst, repeat=2):
print(i)
###Output
_____no_output_____
###Markdown
To read more about the `product()` method, check out the help output:
###Code
help(product)
###Output
_____no_output_____
###Markdown
A [**permutation**](https://en.wikipedia.org/wiki/Permutation) of a set contains all possible arrangements of its members **where order matters**.

![](day_1_assets/Permutations_RGB.svg)

2. `permutations(iterable, r=None)`: returns `r`-length permutations of elements in the `iterable` in lexicographic order (i.e., dictionary order), with no repetition of elements
###Code
from itertools import permutations
###Output
_____no_output_____
###Markdown
For example:
###Code
alph_lst = ['A', 'B', 'C']
for i in permutations(alph_lst):
print(i)
###Output
_____no_output_____
###Markdown
If we pass `r=2` to it, it will print tuples of length 2.
###Code
for i in permutations(alph_lst, r=2):
print(i)
###Output
_____no_output_____
###Markdown
To read more about the `permutations()` method, check out the help output:
###Code
help(permutations)
###Output
_____no_output_____
###Markdown
A [**combination**](https://en.wikipedia.org/wiki/Combination) of a set contains all possible arrangements of its members where **order does not matter**.

3. `combinations(iterable, r)`: returns subsequences of length `r` from the elements of the `iterable`
###Code
from itertools import combinations
###Output
_____no_output_____
###Markdown
The combination tuples are emitted in lexicographic ordering according to the order of the input `iterable`. So, if the input `iterable` is sorted, the combination tuples will be produced in sorted order.Elements are treated as unique based on their position, not on their value. So if the input elements are unique, there will be no repeat values in each combination.
###Code
for i in combinations('ABC', 2):
print(i)
###Output
_____no_output_____
###Markdown
If you noticed, this only returns the tuples that are lexicographically ascending. Here's another example:
###Code
for i in combinations('ABCD', 3):
print(i)
###Output
_____no_output_____
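Because elements are treated as unique by position rather than by value, duplicated input values can produce repeated tuples; a small sketch:

```python
from itertools import combinations

for i in combinations('AAB', 2):
    print(i)   # ('A', 'A'), ('A', 'B'), ('A', 'B')
```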
###Markdown
To read more about the `combinations()` method, check out the help output:
###Code
help(combinations)
###Output
_____no_output_____
###Markdown
4. `combinations_with_replacement(iterable, r)`: returns `r`-length subsequences of elements of the `iterable` where individual elements may repeat
###Code
from itertools import combinations_with_replacement as cwr
###Output
_____no_output_____
###Markdown
For example:
###Code
alph_lst = ['A', 'B', 'C']
for i in cwr(alph_lst, 2):
print(i)
###Output
_____no_output_____
###Markdown
To read more about the `combinations_with_replacement()` method, check out the help output:
###Code
help(cwr)
###Output
_____no_output_____ |
run_optuna_tuning_xgboost_trade_colab_gpu_1thread_per_case_use_16thread_to_parallel_execution_at_colab_gpu_18_iteration_2000_browser_mac_use_previous_db_after_7leg_0506start_main_account_inprog1.ipynb | ###Markdown
###Code
%cd /root/
!wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz
!tar -xzvf ta-lib-0.4.0-src.tar.gz
%cd ta-lib
!./configure --prefix=/usr
!make
!make install
!pip install ta-lib
from google import colab
colab.drive.mount('/content/gdrive')
%cd '/content/gdrive/My Drive/gcolab_workdir_xgboost/'
!pip uninstall -y xgboost
!pip install xgboost
%cd '/content/gdrive/My Drive/gcolab_workdir_xgboost/'
!rm -rf fx_systrade
!git clone -b for_try_keras_trade_learning_at_google_colab https://github.com/ryogrid/fx_systrade.git --depth 1
!cp tr_input_mat.pickle fx_systrade/
!cp tr_angle_mat.pickle fx_systrade/
%cd 'fx_systrade'
!pip install pytz
!pip install optuna
#!optuna create-study --study 'fxsystrade' --storage 'sqlite:///../fxsystrade.db'
!pip list
import xgboost_trade_colab_gpu
xgboost_trade_colab_gpu.run_script("CHANGE_TO_PARAM_TUNING_MODE")
xgboost_trade_colab_gpu.set_tune_trial_num(2000)
xgboost_trade_colab_gpu.set_optuna_special_parallel_num(16)
xgboost_trade_colab_gpu.set_enable_db_at_tune()
xgboost_trade_colab_gpu.run_script("TRAIN_GPU")
#xgboost_trade_colab_gpu.run_script("TRADE_COLAB_CPU")
###Output
data size of rates: 836678
num of rate datas for tarin: 522579
input features sets for tarin: 208952
|
.ipynb_checkpoints/demo-checkpoint.ipynb | ###Markdown
Blacktip Angler Demo

Demonstration of Blacktip Angler use cases for analysis of SEC public company filings.
###Code
import pandas as pd
import matplotlib.pyplot as plt
from credentials import username, password
from blacktip.angler import Angler
###Output
_____no_output_____
###Markdown
Login

Using the login created on the [Blacktip website](http://blacktipresearch.com), log in to Angler.
###Code
instance = Angler(username, password)
###Output
_____no_output_____
###Markdown
Query a Form

Using the instance, query a form (e.g., 10-K or 10-Q) for a specific company and period.
###Code
ticker = "FB" #can also use CIK
period = [2015, 2016, 2017, 2018, 2019] #can also be a list
form = instance.query10K(ticker, period)
display(form.form())
###Output
_____no_output_____
###Markdown
Search the data.
###Code
display(form.asset_sheet().head())
form.filter("^NetIncomeLoss$")
###Output
_____no_output_____
###Markdown
Manipulate the Data

Using the form, we are able to display trends and calculate important metrics.
###Code
ROE = form.calc_ROE()
display(ROE)
CurrentRatio = form.calc_CurrentRatio()
display(CurrentRatio)
BookValue = form.calc_BookValue()
display(BookValue)
DebtToEquity = form.calc_DebtToEquity(as_list=True)
print(DebtToEquity)
###Output
_____no_output_____
###Markdown
Visualize Trends and Compare Companies

Compare companies on certain values over time.
###Code
metric = "^NetIncomeLoss$"
period = list(range(2009, 2020)) # 2009, 2010, 2011, ..., 2019
amzn_AssetsCurrent = instance.query10K("amzn", period).filter(metric)
aapl_AssetsCurrent = instance.query10K("aapl", period).filter(metric)
plt.plot(period, amzn_AssetsCurrent.values[0], label="AMZN")
plt.plot(period, aapl_AssetsCurrent.values[0], label="AAPL")
plt.legend()
plt.xlabel("year")
plt.ylabel(metric)
###Output
_____no_output_____
###Markdown
Mask R-CNN Demo

A quick intro to using the pre-trained model to detect and segment objects.
###Code
import os
import sys
import random
import math
import numpy as np
import skimage.io
import matplotlib
import matplotlib.pyplot as plt
# Root directory of the project
ROOT_DIR = os.getcwd()
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
%matplotlib inline
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
utils.download_trained_weights(COCO_MODEL_PATH)
# Directory of images to run detection on
IMAGE_DIR = os.path.join(ROOT_DIR, "images")
###Output
/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Using TensorFlow backend.
###Markdown
Configurations

We'll be using a model trained on the MS-COCO dataset. The configurations of this model are in the ```CocoConfig``` class in ```coco.py```.

For inferencing, modify the configurations a bit to fit the task. To do so, sub-class the ```CocoConfig``` class and override the attributes you need to change.
###Code
from mrcnn.config import Config
class CocoConfig(Config):
"""Configuration for training on MS COCO.
Derives from the base Config class and overrides values specific
to the COCO dataset.
"""
# Give the configuration a recognizable name
NAME = "coco"
# We use a GPU with 12GB memory, which can fit two images.
# Adjust down if you use a smaller GPU.
IMAGES_PER_GPU = 2
# Uncomment to train on 8 GPUs (default is 1)
# GPU_COUNT = 8
# Number of classes (including background)
NUM_CLASSES = 1 + 80 # COCO has 80 classes
class InferenceConfig(CocoConfig):
# Set batch size to 1 since we'll be running inference on
# one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
GPU_COUNT = 1
IMAGES_PER_GPU = 1
config = InferenceConfig()
config.display()
import tensorflow as tf
print(tf.__version__)
###Output
1.13.1
###Markdown
Create Model and Load Trained Weights
###Code
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)
###Output
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/mask_rcnn-2.1-py3.5.egg/mrcnn/model.py:772: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
###Markdown
Class Names

The model classifies objects and returns class IDs, which are integer values that identify each class. Some datasets assign integer values to their classes and some don't. For example, in the MS-COCO dataset, the 'person' class is 1 and 'teddy bear' is 88. The IDs are often sequential, but not always. The COCO dataset, for example, has classes associated with class IDs 70 and 72, but not 71.

To improve consistency, and to support training on data from multiple sources at the same time, our ```Dataset``` class assigns its own sequential integer IDs to each class. For example, if you load the COCO dataset using our ```Dataset``` class, the 'person' class would get class ID = 1 (just like COCO) and the 'teddy bear' class is 78 (different from COCO). Keep that in mind when mapping class IDs to class names.

To get the list of class names, you'd load the dataset and then use the ```class_names``` property like this.

```
# Load COCO dataset
dataset = coco.CocoDataset()
dataset.load_coco(COCO_DIR, "train")
dataset.prepare()

# Print class names
print(dataset.class_names)
```

We don't want to require you to download the COCO dataset just to run this demo, so we're including the list of class names below. The index of the class name in the list represents its ID (first class is 0, second is 1, third is 2, ...etc.)
###Code
# COCO Class names
# Index of the class in the list is its ID. For example, to get ID of
# the teddy bear class, use: class_names.index('teddy bear')
class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',
'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird',
'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',
'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
'kite', 'baseball bat', 'baseball glove', 'skateboard',
'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',
'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',
'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',
'teddy bear', 'hair drier', 'toothbrush']
###Output
_____no_output_____
###Markdown
Run Object Detection
###Code
# Load a random image from the images folder
file_names = next(os.walk(IMAGE_DIR))[2]
image = skimage.io.imread(os.path.join(IMAGE_DIR, random.choice(file_names)))
# Run detection
results = model.detect([image], verbose=1)
# Visualize results
r = results[0]
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
class_names, r['scores'])
###Output
Processing 1 images
image shape: (437, 640, 3) min: 0.00000 max: 255.00000 uint8
molded_images shape: (1, 1024, 1024, 3) min: -123.70000 max: 150.10000 float64
image_metas shape: (1, 93) min: 0.00000 max: 1024.00000 float64
anchors shape: (1, 261888, 4) min: -0.35390 max: 1.29134 float32
###Markdown
Load the corpus
###Code
import numpy as np
corpus_brtext = []
corpus_brtext_test = []
sents_set = set()
with open('br-text.txt') as f:
for l in f.readlines():
sents_set.add(l.replace('\n',''))
sents_set = list(sents_set)
sents1 = []
for i in sents_set:
sent = i.split(' ')
sents1.append(sent)
for _ in range(400):
corpus_brtext.append([])
for i in set(np.random.choice(range(len(sents1)),200)):
corpus_brtext[-1].append(sents1[i])
corpus_brtext[-1] = [[''.join(j) for j in corpus_brtext[-1]], corpus_brtext[-1]]
sents2 = []
for i in sents_set[int(len(sents_set)*0.9):]:
sent = i.split(' ')
sents2.append(sent)
sents2 = [[''.join(j) for j in sents2], sents2]
corpus_brtext_test.append(sents2)
###Output
_____no_output_____
###Markdown
Run LiB
###Code
import model
import importlib
importlib.reload(model)
model.life = 10
model.max_len = 12
model.memory_in = 0.25
model.memory_out = 0.0001
model.update_rate = 0.2
model.mini_gap = 1
model.use_skip=False
memory = model.TrieList()
corpus_train = corpus_brtext
corpus_test = corpus_brtext_test
model.init(memory, corpus_train[0][0]) # init the Lexicon memory with some unigrams in corpus
for epoch_id in range(5001):
model.run(epoch_id, memory, corpus_train, corpus_test)
###Output
0 MemLength: 131
[B] Precision: 34.67% Recall: 95.18% F1: 50.83%
[L] Precision: 8.20% Recall: 22.52% F1: 12.02%
100 MemLength: 710
[B] Precision: 65.59% Recall: 91.90% F1: 76.55%
[L] Precision: 41.72% Recall: 58.45% F1: 48.69%
200 MemLength: 899
[B] Precision: 70.42% Recall: 91.44% F1: 79.56%
[L] Precision: 48.25% Recall: 62.65% F1: 54.52%
300 MemLength: 994
[B] Precision: 74.84% Recall: 91.28% F1: 82.25%
[L] Precision: 56.06% Recall: 68.39% F1: 61.62%
400 MemLength: 1096
[B] Precision: 74.87% Recall: 91.36% F1: 82.30%
[L] Precision: 54.61% Recall: 66.63% F1: 60.02%
500 MemLength: 1162
[B] Precision: 74.61% Recall: 90.29% F1: 81.70%
[L] Precision: 54.23% Recall: 65.63% F1: 59.39%
600 MemLength: 1215
[B] Precision: 74.96% Recall: 89.95% F1: 81.77%
[L] Precision: 53.11% Recall: 63.72% F1: 57.93%
700 MemLength: 1256
[B] Precision: 76.73% Recall: 90.37% F1: 82.99%
[L] Precision: 57.42% Recall: 67.62% F1: 62.10%
800 MemLength: 1292
[B] Precision: 76.18% Recall: 90.94% F1: 82.91%
[L] Precision: 55.52% Recall: 66.28% F1: 60.43%
900 MemLength: 1321
[B] Precision: 77.66% Recall: 90.21% F1: 83.47%
[L] Precision: 57.35% Recall: 66.63% F1: 61.64%
1000 MemLength: 1365
[B] Precision: 77.80% Recall: 90.44% F1: 83.65%
[L] Precision: 57.51% Recall: 66.86% F1: 61.83%
1100 MemLength: 1400
[B] Precision: 79.77% Recall: 89.11% F1: 84.18%
[L] Precision: 59.75% Recall: 66.74% F1: 63.06%
1200 MemLength: 1425
[B] Precision: 78.18% Recall: 89.03% F1: 83.25%
[L] Precision: 57.47% Recall: 65.44% F1: 61.20%
1300 MemLength: 1456
[B] Precision: 78.78% Recall: 88.53% F1: 83.37%
[L] Precision: 57.96% Recall: 65.14% F1: 61.34%
1400 MemLength: 1477
[B] Precision: 78.43% Recall: 88.42% F1: 83.13%
[L] Precision: 57.48% Recall: 64.79% F1: 60.92%
1500 MemLength: 1501
[B] Precision: 80.57% Recall: 89.22% F1: 84.67%
[L] Precision: 60.89% Recall: 67.43% F1: 63.99%
1600 MemLength: 1507
[B] Precision: 82.14% Recall: 88.61% F1: 85.25%
[L] Precision: 63.11% Recall: 68.08% F1: 65.50%
1700 MemLength: 1528
[B] Precision: 81.54% Recall: 88.30% F1: 84.79%
[L] Precision: 61.95% Recall: 67.09% F1: 64.42%
1800 MemLength: 1546
[B] Precision: 81.16% Recall: 89.07% F1: 84.93%
[L] Precision: 61.62% Recall: 67.62% F1: 64.48%
1900 MemLength: 1563
[B] Precision: 82.59% Recall: 89.37% F1: 85.85%
[L] Precision: 64.50% Recall: 69.80% F1: 67.05%
2000 MemLength: 1568
[B] Precision: 81.69% Recall: 88.34% F1: 84.89%
[L] Precision: 61.79% Recall: 66.82% F1: 64.21%
2100 MemLength: 1579
[B] Precision: 82.38% Recall: 89.56% F1: 85.82%
[L] Precision: 63.96% Recall: 69.53% F1: 66.63%
2200 MemLength: 1597
[B] Precision: 81.95% Recall: 89.37% F1: 85.50%
[L] Precision: 63.51% Recall: 69.27% F1: 66.26%
2300 MemLength: 1606
[B] Precision: 81.59% Recall: 88.11% F1: 84.73%
[L] Precision: 61.13% Recall: 66.02% F1: 63.48%
2400 MemLength: 1625
[B] Precision: 81.65% Recall: 87.96% F1: 84.69%
[L] Precision: 61.85% Recall: 66.63% F1: 64.15%
2500 MemLength: 1637
[B] Precision: 80.95% Recall: 87.23% F1: 83.97%
[L] Precision: 59.28% Recall: 63.88% F1: 61.49%
2600 MemLength: 1643
[B] Precision: 81.76% Recall: 89.45% F1: 85.43%
[L] Precision: 63.24% Recall: 69.19% F1: 66.08%
2700 MemLength: 1652
[B] Precision: 81.53% Recall: 89.41% F1: 85.29%
[L] Precision: 62.81% Recall: 68.88% F1: 65.71%
2800 MemLength: 1663
[B] Precision: 81.14% Recall: 88.95% F1: 84.87%
[L] Precision: 61.02% Recall: 66.90% F1: 63.82%
2900 MemLength: 1679
[B] Precision: 82.34% Recall: 89.98% F1: 85.99%
[L] Precision: 64.08% Recall: 70.03% F1: 66.92%
3000 MemLength: 1695
[B] Precision: 81.86% Recall: 88.30% F1: 84.96%
[L] Precision: 62.05% Recall: 66.93% F1: 64.40%
3100 MemLength: 1713
[B] Precision: 82.86% Recall: 89.26% F1: 85.94%
[L] Precision: 64.90% Recall: 69.92% F1: 67.32%
3200 MemLength: 1721
[B] Precision: 82.30% Recall: 87.96% F1: 85.03%
[L] Precision: 62.27% Recall: 66.55% F1: 64.34%
3300 MemLength: 1730
[B] Precision: 81.94% Recall: 88.80% F1: 85.23%
[L] Precision: 63.10% Recall: 68.39% F1: 65.64%
3400 MemLength: 1742
[B] Precision: 82.61% Recall: 88.26% F1: 85.34%
[L] Precision: 63.33% Recall: 67.66% F1: 65.42%
3500 MemLength: 1743
[B] Precision: 82.68% Recall: 89.26% F1: 85.85%
[L] Precision: 63.95% Recall: 69.04% F1: 66.40%
3600 MemLength: 1757
[B] Precision: 81.54% Recall: 87.12% F1: 84.24%
[L] Precision: 60.68% Recall: 64.83% F1: 62.69%
3700 MemLength: 1765
[B] Precision: 82.27% Recall: 86.54% F1: 84.35%
[L] Precision: 61.45% Recall: 64.64% F1: 63.00%
3800 MemLength: 1769
[B] Precision: 82.55% Recall: 87.35% F1: 84.88%
[L] Precision: 62.21% Recall: 65.83% F1: 63.97%
3900 MemLength: 1786
[B] Precision: 81.79% Recall: 88.61% F1: 85.06%
[L] Precision: 61.50% Recall: 66.63% F1: 63.96%
4000 MemLength: 1792
[B] Precision: 82.62% Recall: 87.39% F1: 84.93%
[L] Precision: 62.49% Recall: 66.09% F1: 64.24%
4100 MemLength: 1798
[B] Precision: 83.14% Recall: 86.93% F1: 84.99%
[L] Precision: 63.22% Recall: 66.09% F1: 64.62%
4200 MemLength: 1807
[B] Precision: 82.67% Recall: 87.35% F1: 84.94%
[L] Precision: 62.66% Recall: 66.21% F1: 64.39%
4300 MemLength: 1815
[B] Precision: 82.29% Recall: 88.46% F1: 85.26%
[L] Precision: 62.91% Recall: 67.62% F1: 65.18%
4400 MemLength: 1814
[B] Precision: 80.96% Recall: 88.07% F1: 84.36%
[L] Precision: 60.26% Recall: 65.56% F1: 62.80%
4500 MemLength: 1814
[B] Precision: 81.13% Recall: 87.77% F1: 84.32%
[L] Precision: 60.35% Recall: 65.29% F1: 62.72%
4600 MemLength: 1823
[B] Precision: 82.34% Recall: 88.42% F1: 85.27%
[L] Precision: 63.12% Recall: 67.78% F1: 65.36%
4700 MemLength: 1831
[B] Precision: 79.83% Recall: 86.39% F1: 82.98%
[L] Precision: 56.38% Recall: 61.01% F1: 58.60%
4800 MemLength: 1830
[B] Precision: 82.14% Recall: 87.92% F1: 84.93%
[L] Precision: 62.00% Recall: 66.36% F1: 64.11%
4900 MemLength: 1844
[B] Precision: 81.50% Recall: 86.58% F1: 83.97%
[L] Precision: 60.06% Recall: 63.80% F1: 61.87%
5000 MemLength: 1847
[B] Precision: 82.53% Recall: 86.31% F1: 84.38%
[L] Precision: 61.70% Recall: 64.53% F1: 63.08%
###Markdown
See the head entities in the Lexicon memory (L)
###Code
memory[:50]
article, article_raw = corpus_train[2]
onset, end = 10, 20
print('---\nchunks\n---')
model.show_result(memory, article_raw[onset:end], article[onset:end], decompose=False)
print('---\nsubchunks\n---')
model.show_result(memory, article_raw[onset:end], article[onset:end], decompose=True)
###Output
---
chunks
---
can you make a tower with what you have
canyou make atower with whatyou have
you can get down by yourself see she has her
youcan getdown b y your self seeshehas her
pajamas on don't honey you'll break it
p a j am as on don't honey you 'll br eak it
numbers are those slippers no what that's a
numbers arethose s l i pper s now hat that'sa
---
subchunks
---
can you make a tower with what you
canyou make atower with what you
have you can get down by yourself
have youcan get down b y your self
see she has her pajamas on don't honey
seeshehas her p a j am as on don't honey
you'll break it numbers are those slippers
you 'll br eak it numbers are those s l i p per s
no what that's a bird which color
now h at that's a bird which color
###Markdown
Category Encoders

http://contrib.scikit-learn.org/category_encoders/index.html

A set of scikit-learn-style transformers for encoding categorical variables into numeric with different techniques.
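As a point of reference for the encoders used below, count (frequency) encoding simply replaces each category with how often it appears in the column. A rough pandas sketch of the idea (not the library's actual implementation):

```python
import pandas as pd

s = pd.Series(['a', 'b', 'a', 'c', 'a', 'b'])
counts = s.value_counts()          # a -> 3, b -> 2, c -> 1
print(s.map(counts).tolist())      # [3, 2, 3, 1, 3, 2]
```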
###Code
import category_encoders as ce
import numpy as np
import pandas as pd
df = pd.DataFrame({'a':[11,15,5,4,5,7,8,14,10,10],
'b':[1,1,1,2,2,2,3,3,3,3],
'c':[10,10,11,11,2,2,2,4,4,4]
})
df
y = np.array([11,10,9,8.7,9.1,10,11,8.5,9,10])
df
encoder = ce.CountEncoder()
dd = encoder.fit_transform(df['a'])
dd
df.to_csv('../demo/demo.csv',index=False)
enc = ce.CountEncoder(cols=['a','c']).fit(df)
df0 = enc.transform(df)
df0
enc = ce.CatBoostEncoder(cols=['a','c']).fit(df,y)
df0 = enc.transform(df)
df0
enc = ce.BinaryEncoder(cols=['a','c']).fit(df)
df0 = enc.transform(df)
df0
enc = ce.BaseNEncoder(cols=['a','c']).fit(df,y)
df0 = enc.transform(df)
df0
?encoder
import pandas as pd
from sklearn.datasets import load_boston
from category_encoders import CountEncoder
bunch = load_boston()
y = bunch.target
X = pd.DataFrame(bunch.data, columns=bunch.feature_names)
enc = CountEncoder(cols=['CHAS', 'RAD']).fit(X, y)
numeric_dataset = enc.transform(X)
X
numeric_dataset
from xgboost import XGBClassifier
?XGBClassifier
###Output
_____no_output_____
###Markdown
XGBoost GPU version: christophm.github.io/interpretable-ml-book/
###Code
import xgboost as xgb
from sklearn.datasets import load_boston
boston = load_boston()
# XGBoost API example
params = {'tree_method': 'gpu_hist', 'max_depth': 3, 'learning_rate': 0.1}
dtrain = xgb.DMatrix(boston.data, boston.target)
xgb.train(params, dtrain, evals=[(dtrain, "train")])
# sklearn API example
gbm = xgb.XGBRegressor(silent=False, n_estimators=10, tree_method='gpu_hist')
gbm.fit(boston.data, boston.target, eval_set=[(boston.data, boston.target)])
###Output
[0] train-rmse:21.6024
[1] train-rmse:19.5552
[2] train-rmse:17.715
[3] train-rmse:16.062
[4] train-rmse:14.5715
[5] train-rmse:13.2409
[6] train-rmse:12.0339
[7] train-rmse:10.9579
[8] train-rmse:9.97879
[9] train-rmse:9.10759
[15:26:13] WARNING: C:/Jenkins/workspace/xgboost-win64_release_0.90/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[0] validation_0-rmse:21.6024
[1] validation_0-rmse:19.5552
[2] validation_0-rmse:17.715
[3] validation_0-rmse:16.062
[4] validation_0-rmse:14.5715
[5] validation_0-rmse:13.2409
[6] validation_0-rmse:12.0339
[7] validation_0-rmse:10.9579
[8] validation_0-rmse:9.97879
[9] validation_0-rmse:9.10759
###Markdown
This demonstrates how to use Polynomial2D(). Our first example uses Polynomial2D().test() to simulate an example. Second, we use a real data set.

1. Polynomial2D().test()
###Code
##############################
# to import Polynomial2D()
from polynomial2d.polynomial2d import Polynomial2D
##############################
import matplotlib.pyplot as plt
import copy
obj = Polynomial2D()
obj.data,obj.model
# calling Polynomial2D() construction a template.
obj.test(nsize=10,norder=1) # test() to simulate x1,x2,coef
obj.compute() # to compute obj.model['YFIT']
obj.data,obj.model
# next we demonstrate usint fit()
testobj = copy.deepcopy(obj)
testobj.data['Y'] = testobj.model['YFIT'].copy()
testobj.model['YFIT'] = None
testobj.model['COEF'] = None
testobj.data,testobj.model
# fit()
testobj.model['NORDER'] = 1
testobj.fit()
testobj.model['COEF'],obj.model['COEF']
# compare that fitted coefs are correct.
###Output
##########
##########
Iteration 1
###Markdown
2. Demonstrate with real data

We use idlk04bcq_flt.fits downloaded from MAST. This is a grism image. Our objective here is to estimate the background underneath an object around pixX = [500:675], pixY = [530,560]. We provide some parameters here.
###Code
xref,yref = 488,542
bb0x,bb0y = 502,534
bb1x,bb1y = 684,553
padxleft,xwidth,padxright = 5,int(bb1x-xref),5
padylow,halfdy,padyup = 10,3,15
import numpy as np
import matplotlib.pyplot as plt
import copy
from astropy.io import fits
import os
cwd = os.getcwd()
filename = './idlk04bcq_flt.fits'
os.chdir(cwd)
tmp = fits.open(filename)
tmpdata = tmp[1].data.copy()
m = np.where(np.isfinite(tmpdata))
vmin,vmax = np.percentile(tmpdata[m],5.),np.percentile(tmpdata[m],99.)
plt.imshow(tmpdata,origin='lower',cmap='viridis',vmin=vmin,vmax=vmax)
plt.xlim(xref-padxleft,xref+xwidth+padxright)
plt.ylim(yref-halfdy-padylow,yref+halfdy+padyup)
# keep the section in Polynomial2D()
obj = Polynomial2D()
##########
# create 2D x1,x2 grids
tmpx = int(xref-padxleft)
tmppx = int(xref+xwidth+padxright)
x1 = np.arange(tmpx,tmppx)
tmpy = int(yref-halfdy-padylow)
tmppy = int(1+yref+halfdy+padyup)
x2 = np.arange(tmpy,tmppy)
x1,x2 = np.meshgrid(x1,x2)
obj.data['X1'] = x1.copy()
obj.data['X2'] = x2.copy()
##########
# cut the image for y
tmp = fits.open(filename)
tmpdata = tmp[1].data.copy()
obj.data['Y'] = tmpdata[tmpy:tmppy,tmpx:tmppx]
##########
# cut the data quality 'DQ' as mask
tmpdq = tmp['DQ'].data.copy()
tmp = np.full_like(tmpdq,True,dtype=bool)
m = np.where(tmpdq==0)
tmp[m] = False
obj.data['MASK'] = tmp[tmpy:tmppy,tmpx:tmppx]
# 3D plots of y and mask (= 0 for good data)
fig = plt.figure(figsize=(10,10))
ax = plt.axes(projection='3d')
tmp = copy.deepcopy(obj.data)
ax.plot_surface(tmp['X1'],tmp['X2'],tmp['Y'],
cmap='viridis'
)
ax.set_xlabel('X1')
ax.set_ylabel('X2')
ax.set_zlabel('Y')
ax.view_init(45,-90)
fig = plt.figure(figsize=(10,10))
ax = plt.axes(projection='3d')
tmp = copy.deepcopy(obj.data)
ax.plot_surface(tmp['X1'],tmp['X2'],tmp['MASK'].astype(int),
cmap='Greys'
)
ax.set_xlabel('X1')
ax.set_ylabel('X2')
ax.set_zlabel('MASK = 1')
ax.view_init(90,-90)
# fit()
obj.model['NORDER'] = 4
obj.fit()
obj.compute()
# 3D plots of y and yfit
fig = plt.figure(figsize=(10,10))
ax = plt.axes(projection='3d')
tmp = copy.deepcopy(obj.data)
ax.plot_surface(tmp['X1'],tmp['X2'],tmp['Y'],
cmap='viridis',alpha=0.4
)
ax.plot_surface(tmp['X1'],tmp['X2'],obj.model['YFIT'],
cmap='Greys'
)
ax.set_xlabel('X1')
ax.set_ylabel('X2')
ax.set_zlabel('Y')
ax.view_init(40,-45)
# 3D plots of y - yfit
fig = plt.figure(figsize=(10,10))
ax = plt.axes(projection='3d')
tmp = copy.deepcopy(obj.data)
ax.plot_surface(tmp['X1'],tmp['X2'],tmp['Y'] - obj.model['YFIT'],
cmap='Greys'
)
ax.set_xlabel('X1')
ax.set_ylabel('X2')
ax.set_zlabel('Y')
ax.view_init(90,-90)
###Output
_____no_output_____
###Markdown
Physics-informed Bayesian network for matching process / variable / performance in solar cells.

BayesProcess is a Python program for process optimization of solar cells using physics-informed Bayesian network inference with a neural network surrogate model.

Detailed information about the model structure can be found in the following papers:

*Ren and Oveido et al., Physics-guided characterization and optimization of solar cells using surrogate machine learning model, IEEE PVSC46, 2019*

*Ren and Oveido et al., Embedding Physics Domain Knowledge into a Bayesian Network Enables Layer-by-Layer Process Innovation for Photovoltaics, accepted in npj Computational Materials*

Below is the schematic of our Bayesian-network-based process-optimization model, featuring a two-step Bayesian inference that first links process conditions to material descriptors, then the latter to device performance.

![title](https://github.com/PV-Lab/BayesProcess/blob/master/Pictures/1.PNG)

The model consists of two main parts:

1. NN surrogate model for denoising experimental JV curves and predicting JV curves from material descriptors
2. Two-step Bayesian inference (Bayesian network) to map process conditions to material properties

We will show how the surrogate model works first. The surrogate model replaces the numerical PDE solver with a NN, which enables a >100x speed-up in computation and the capability of handling noisy data. The model schematic is shown below.

![title](https://github.com/PV-Lab/BayesProcess/blob/master/Pictures/2.PNG)

1. NN Surrogate model for denoising experimental JV curves and predicting JV curves from material descriptors

Libraries and dependencies:
###Code
from keras import backend as K
from keras.models import Model
from keras.callbacks import ReduceLROnPlateau
from keras.layers import Input, Dense, Lambda,Conv1D,Conv2DTranspose, LeakyReLU,Activation,Flatten,Reshape
import matplotlib.pyplot as plt
import numpy as np
import os
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from keras.models import load_model
from emcee import PTSampler
import warnings
warnings.filterwarnings('ignore')
plt.rcParams["figure.figsize"] = [8, 6]
plt.rcParams.update({'font.size': 16})
plt.rcParams["font.family"] = "calibri"
###Output
_____no_output_____
###Markdown
Load data and preprocess
###Code
# Load simulated and unnormalized JV dataset
JV_raw = np.loadtxt('./Dataset/GaAs_sim_nJV.txt')
# Load material parameters that generated the JV dataset
par = np.loadtxt('./Dataset/GaAs_sim_label.txt')
def Conv1DTranspose(input_tensor, filters, kernel_size, strides ):
x = Lambda(lambda x: K.expand_dims(x, axis=2))(input_tensor)
x = Conv2DTranspose(filters=filters, kernel_size=(kernel_size, 1), strides=(strides, 1),padding='SAME')(x)
x = Lambda(lambda x: K.squeeze(x, axis=2))(x)
return x
#Convert labels from log10 form to natural log
def log10_ln(x):
return np.log(np.power(10,x))
par = log10_ln(par)
#Data normalization for the whole JV dataset
def min_max(x):
min = np.min(x)
max = np.max(x)
return (x-min)/(max-min),max,min
#Normalize raw JV data
JV_norm,JV_max,JV_min = min_max(JV_raw)
#Normalize JV descriptors column-wise
scaler = MinMaxScaler()
par_n = scaler.fit_transform(par)
#create training and testing dataset
X_train, X_test, y_train, y_test = train_test_split(JV_norm,par_n, test_size=0.2)
#add in Gaussian noise to train the denoising Autoencoder
X_train_nos = X_train+0.002 * np.random.normal(loc=0.0, scale=1.0, size=X_train.shape)
X_test_nos = X_test+0.002 * np.random.normal(loc=0.0, scale=1.0, size=X_test.shape)
###Output
_____no_output_____
###Markdown
Let's take a look at our data
###Code
plt.plot(X_train[0,:])
plt.xlabel('voltage(a.u.)')
plt.ylabel('current(a.u.)')
###Output
_____no_output_____
###Markdown
build the denoising Autoencoder
###Code
input_dim = X_train.shape[1]
label_dim = y_train.shape[1]
#JVi dim
x = Input(shape=(input_dim,))
#material descriptor dim
y = Input(shape =(label_dim,))
# Network Parameters
max_filter = 256
strides = [5,2,2]
kernel = [7,5,3]
Batch_size = 128
#build the encoder
def encoder(x):
x = Lambda(lambda x: K.expand_dims(x, axis=2))(x)
en0 = Conv1D(max_filter//4,kernel[0],strides= strides[0], padding='SAME')(x)
en0 = LeakyReLU(0.2)(en0)
en1 = Conv1D(max_filter//2,kernel[1],strides=strides[1], padding='SAME')(en0)
en1 = LeakyReLU(0.2)(en1)
en2 = Conv1D(max_filter,kernel[2], strides=strides[2],padding='SAME')(en1)
en2 = LeakyReLU(0.2)(en2)
en3 = Flatten()(en2)
en3 = Dense(100,activation = 'relu')(en3)
z = Dense(label_dim,activation = 'linear')(en3)
return z
z = encoder(x)
encoder_ = Model(x,z)
map_size = K.int_shape(encoder_.layers[-4].output)[1]
#build the decoder
z1 = Dense(100,activation = 'relu')(z)
z1 = Dense(max_filter*map_size,activation='relu')(z1)
z1 = Reshape((map_size,1,max_filter))(z1)
z2 = Conv2DTranspose( max_filter//2, (kernel[2],1), strides=(strides[2],1),padding='SAME')(z1)
z2 = Activation('relu')(z2)
z3 = Conv2DTranspose(max_filter//4, (kernel[1],1), strides=(strides[1],1),padding='SAME')(z2)
z3 = Activation('relu')(z3)
z4 = Conv2DTranspose(1, (kernel[0],1), strides=(strides[0],1),padding='SAME')(z3)
decoded_x = Activation('sigmoid')(z4)
decoded_x = Lambda(lambda x: K.squeeze(x, axis=2))(decoded_x)
decoded_x = Lambda(lambda x: K.squeeze(x, axis=2))(decoded_x)
#Denoising autoencoder
ae = Model(inputs= x,outputs= decoded_x)
#ae loss
def ae_loss(x, decoded_x):
ae_loss = K.mean(K.sum(K.square(x- decoded_x),axis=-1))
return ae_loss
ae.compile(optimizer = 'adam', loss= ae_loss)
reduce_lr = ReduceLROnPlateau(monitor = 'loss', factor=0.5,
patience=5, min_lr=0.00001)
ae.fit(X_train_nos,X_train,shuffle=True,
batch_size=128,epochs = 50,
validation_split=0.0, validation_data=None, callbacks=[reduce_lr])
###Output
Epoch 1/50
15999/15999 [==============================] - 1s 93us/step - loss: 2.8392
Epoch 2/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.3477
Epoch 3/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.1146
Epoch 4/50
15999/15999 [==============================] - 1s 54us/step - loss: 0.1071
Epoch 5/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0948
Epoch 6/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0735
Epoch 7/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0357
Epoch 8/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0277
Epoch 9/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0145
Epoch 10/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0112
Epoch 11/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0088
Epoch 12/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0085
Epoch 13/50
15999/15999 [==============================] - 1s 54us/step - loss: 0.0057
Epoch 14/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0105
Epoch 15/50
15999/15999 [==============================] - 1s 54us/step - loss: 0.0042
Epoch 16/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0058
Epoch 17/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0042
Epoch 18/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0043
Epoch 19/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0039
Epoch 20/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0076
Epoch 21/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0029
Epoch 22/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0122
Epoch 23/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0036
Epoch 24/50
15999/15999 [==============================] - 1s 54us/step - loss: 0.0026
Epoch 25/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0039
Epoch 26/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0045
Epoch 27/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0037
Epoch 28/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0037
Epoch 29/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0042
Epoch 30/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0016
Epoch 31/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0013
Epoch 32/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0013
Epoch 33/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0012
Epoch 34/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0013
Epoch 35/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0012
Epoch 36/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0012
Epoch 37/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0012
Epoch 38/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0013
Epoch 39/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0014
Epoch 40/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0015
Epoch 41/50
15999/15999 [==============================] - 1s 53us/step - loss: 0.0019
Epoch 42/50
15999/15999 [==============================] - 1s 53us/step - loss: 9.5818e-04
Epoch 43/50
15999/15999 [==============================] - 1s 53us/step - loss: 9.3389e-04
Epoch 44/50
15999/15999 [==============================] - 1s 53us/step - loss: 9.1527e-04
Epoch 45/50
15999/15999 [==============================] - 1s 53us/step - loss: 8.7477e-04
Epoch 46/50
15999/15999 [==============================] - 1s 53us/step - loss: 8.6421e-04
Epoch 47/50
15999/15999 [==============================] - 1s 53us/step - loss: 8.8106e-04
Epoch 48/50
15999/15999 [==============================] - 1s 53us/step - loss: 7.7422e-04
Epoch 49/50
15999/15999 [==============================] - 1s 53us/step - loss: 7.5253e-04
Epoch 50/50
15999/15999 [==============================] - 1s 53us/step - loss: 7.4032e-04
###Markdown
plot the noisy JVi and the reconstructed JVi
###Code
x_test_decoded= ae.predict(X_test_nos)
rand_ind = np.random.randint(0,100)
plt.plot(x_test_decoded[rand_ind,:],label='AE')
plt.plot(X_test_nos[rand_ind,:],'--',label='raw')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
build the regression model using the same structure as the decoder
###Code
z_in = Input(shape=(label_dim,))
z1 = Dense(100,activation = 'relu')(z_in)
z1 = Dense(max_filter*map_size,activation='relu')(z1)
z1 = Reshape((map_size,1,max_filter))(z1)
z2 = Conv2DTranspose( max_filter//2, (kernel[2],1), strides=(strides[2],1),padding='SAME')(z1)
z2 = Activation('relu')(z2)
z3 = Conv2DTranspose(max_filter//4, (kernel[1],1), strides=(strides[1],1),padding='SAME')(z2)
z3 = Activation('relu')(z3)
z4 = Conv2DTranspose(1, (kernel[0],1), strides=(strides[0],1),padding='SAME')(z3)
decoded_x = Activation('sigmoid')(z4)
decoded_x = Lambda(lambda x: K.squeeze(x, axis=2))(decoded_x)
decoded_x = Lambda(lambda x: K.squeeze(x, axis=2))(decoded_x)
reg = Model(z_in,decoded_x)
reg.compile(loss='mse',optimizer='adam')
reg.fit(y_train,X_train,shuffle=True,batch_size=128,epochs = 50,
validation_split=0.0, validation_data=None)
y_hat_train = reg.predict(y_train)
y_hat_test = reg.predict(y_test)
#voltage sweep
v_sweep = np.linspace (0,1.1,100)
v_total =np.tile(v_sweep,5).reshape(1,-1)
mse = mean_squared_error
mse_train = mse(y_hat_train,X_train)
mse_test = mse(y_hat_test,X_test)
print ('train mse: %.6f' % (mse_train))
print ('test mse: %.6f' % (mse_test))
###Output
train mse: 0.000049
test mse: 0.000048
###Markdown
save the denoising AE and the regression model
###Code
ae.save('./TrainedModel/GaAs_AE.h5')
reg.save('./TrainedModel/GaAs_reg.h5')
###Output
_____no_output_____
###Markdown
2. Two-step Bayesian inference (Bayesian network) to map process conditions to material properties

Architecture of our Bayesian inference network, used to identify new windows for process optimization:

![title](https://github.com/PV-Lab/BayesProcess/blob/master/Pictures/3.PNG)

Load process parameters and experimental data
###Code
#MOCVD growth temperature
Temp = np.array([530,580,630,650,680])
#convert temperature to -1000/T for the Arrhenius equation input
x = -1000/(np.array(Temp))
JV_exp =np.loadtxt('./Dataset/GaAs_exp_nJV.txt')
par = np.loadtxt('./Dataset/GaAs_sim_label.txt')
plt.plot(JV_exp[0,:])
plt.xlabel('voltage (a.u.)')
plt.ylabel('current (a.u.)')
###Output
_____no_output_____
###Markdown
denoise experimental JV using the AE
###Code
JV_exp = ae.predict(JV_exp)
plt.plot(JV_exp[0,:],color='red')
plt.xlabel('voltage (a.u.)')
plt.ylabel('current (a.u.)')
###Output
_____no_output_____
###Markdown
Preprocessing of data: Normalize JV descriptors column-wise
###Code
par_n = scaler.fit_transform(par)
###Output
_____no_output_____
###Markdown
Setting up MCMC for the two-step Bayesian inference

Define the Bayesian inference framework
###Code
#define the lognormal pdf
def log_norm_pdf(y,mu,sigma):
return -0.5*np.sum((y-mu)**2/sigma)+np.log(sigma)
#define the logprobability based on Arrhenius equation
###Output
_____no_output_____
###Markdown
Embedding domain knowledge into the prior in Bayesian inference

We parameterize the prior in Arrhenius equation form with a temperature-dependent pre-exponential factor. We use the pretrained NN model as the likelihood function in the Bayesian inference framework.
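Concretely, with $\tilde{x} = -1000/T$ as the input below, one reading of the parameterization used in the code is that each (normalized, log-scale) material descriptor $d_i$ is modeled as

$$d_i(\tilde{x}) = a_i \ln\!\left(-\frac{1}{\tilde{x}}\right) + b_i\,\tilde{x} + c_i
               = a_i \ln\!\left(\frac{T}{1000}\right) - \frac{1000\,b_i}{T} + c_i ,$$

i.e., an Arrhenius-type term (linear in $-1/T$ in log space) with a temperature-dependent pre-exponential factor proportional to $T^{a_i}$. The coefficients $(a_i, b_i, c_i)$ for the five descriptors are the 15 parameters sampled by MCMC; the code below additionally scales the stacked descriptors by 10 before passing them to the surrogate model.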
###Code
#define the lognormal pdf
def log_norm_pdf(y,mu,sigma):
return -0.5*np.sum((y-mu)**2/sigma)+np.log(sigma)
#define the logprobability based on Arrhenius equation
def log_probability(theta,x,y,sigma):
a1,b1, c1, a2,b2,c2, a3,b3,c3, a4,b4,c4, a5,b5,c5 = theta
emitter_doping = a1*np.log(-1/x)+b1*x+c1
back_doping = a2*np.log(-1/x)+b2*x+c2
tau = (a3*np.log(-1/x)+b3*x+c3)
fsrv = (a4*np.log(-1/x)+b4*x+c4)
rsrv = (a5*np.log(-1/x)+b5*x+c5)
#stack all 5 material descriptors
par_input = 10*np.stack((emitter_doping,back_doping,tau,fsrv,rsrv),axis=-1)
coeff = [a1,b1,c1,a2,b2,c2,a3,b3,c3,a4,b4,c4,a5,b5,c5]
#setting prior and constraints
if all(-10<x<10 for x in coeff) and max(np.abs(coeff[0::3]))<5:
if np.max(par_input)<1 and np.min(par_input)>0:
sim_curves= reg.predict(par_input)
return log_norm_pdf(sim_curves, y,sigma)
return -np.inf
return -np.inf
def logp(x):
return 0.0
###Output
_____no_output_____
###Markdown
Training Parameters
###Code
sigma = 1e-4
ntemp = 10
nruns = 2000
Temp_i = 0
#initialize the chain with a=0, b=0, c=0.5
pos = np.tile((0,0,0.5),5)/10+1e-4*np.random.randn(ntemp,64, 15)
ntemps, nwalkers, ndim = pos.shape
###Output
_____no_output_____
###Markdown
Perform the MCMC run
###Code
#first MCMC chain
sampler = PTSampler(ntemps,nwalkers, ndim, log_probability,logp, loglargs=(x, JV_exp, sigma))
sampler.run_mcmc(pos, nruns )
samples = sampler.chain
#use the values obtained in the first MCMC chain to update the initial estimate
pos_update = samples[:,:,-1,:]+1e-5*np.random.randn(ntemp,64, 15)
sampler.reset()
#second MCMC chain
sampler = PTSampler(ntemps,nwalkers, ndim, log_probability,logp, loglargs=(x, JV_exp, sigma))
sampler.run_mcmc(pos_update, nruns);
flat_samples = sampler.flatchain
zero_flat_samples = flat_samples[Temp_i,:,:]
zero_samples = samples[Temp_i,:,:,:]
###Output
_____no_output_____
###Markdown
Visualization of parameters and loss
###Code
#visualize the loss (zero_flat_loss is assumed to hold the flattened log-probability trace from the sampler, computed in a step not shown here)
plt.figure()
plt.plot(-1*zero_flat_loss[1,:])
plt.xlabel('run number')
plt.ylabel('loss')
#function to show the predicted JV
def check_plot(theta,x,sim):
a1,b1, c1, a2,b2,c2, a3,b3,c3, a4,b4,c4, a5,b5,c5 = theta
emitter_doping = a1*np.log(-1/x)+b1*x+c1
back_doping = a2*np.log(-1/x)+b2*x+c2
tau = (a3*np.log(-1/x)+b3*x+c3)
fsrv = (a4*np.log(-1/x)+b4*x+c4)
rsrv = (a5*np.log(-1/x)+b5*x+c5)
par_input = 10*np.stack((emitter_doping,back_doping,tau,fsrv,rsrv),axis=-1)
if sim == 0 :
unnorm_par = scaler.inverse_transform(par_input)
return par_input,unnorm_par
sim_curves= reg.predict(par_input)
return sim_curves, par_input
sim_JVs,_ = check_plot(flat_samples[Temp_i,-1,:],x,1)
###Output
_____no_output_____
###Markdown
check the fitted JV curves
###Code
fig,ax = plt.subplots(5,1)
for i in range(5):
ax[i,].plot(sim_JVs[i,:],'--')
ax[i,].plot(JV_exp[i,:])
#Extract material properties in a finer (-1/T) grid
x_step = np.linspace(min(x),max(x),50)
par_in = []
for i in range(zero_flat_samples.shape[0]):
_,par_input = check_plot(zero_flat_samples[i,:],x_step,0)
par_in.append(par_input)
par_in= np.array(par_in)
#discard the values obtained at the beginning of the chain
par_in = par_in[2000:,:,:]
par_in = (np.exp(par_in))
plt.xlabel('voltage (a.u.)')
plt.ylabel('current (a.u.)')
###Output
_____no_output_____
###Markdown
Plot materials parameters vs process conditions
###Code
################################################################
#plotting the material properties vs temperature
################################################################
def plot_uncertain(x,y):
mu = np.mean(y,axis = 0)
std = np.std(y, axis = 0)
plt.fill_between(x, mu+std,mu-std,alpha=0.1,color='grey')
plt.plot(x,mu,color='black')
plt.rcParams["figure.figsize"] = [8, 10]
plt.rcParams.update({'font.size': 16})
plt.rcParams["font.family"] = "calibri"
fig = plt.figure()
y_label = ['Conc.[cm-3]','Conc.[cm-3]', r'$\tau$ [s]', 'SRV [cm/S]','SRV [cm/S]']
x_labels = ['-1/530' ,'-1/580','-1/630','-1/680']
title = ['Zn emitter doping' , 'Si base doping' ,'bulk lifetime','Front SRV', 'Rear SRV']
for i in range(5):
plt.subplot(5,1,i+1)
l1=plot_uncertain(x_step,par_in[:,:,i])
plt.yscale('log')
plt.ylabel(y_label[i])
plt.xticks([-1000/530,-1000/580,-1000/630,-1000/680],[])
plt.title(title[i],fontsize=15,fontweight='bold')
plt.xlim(-1000/530,-1000/680)
plt.xticks([-1000/530,-1000/580,-1000/630,-1000/680], x_labels)
plt.xlabel(r'-1/T [1/C]')
fig.align_labels()
###Output
_____no_output_____
###Markdown
This is a demo illustrating an application of the OS2D method on one image. The demo assumes the OS2D code is [installed](./INSTALL.md).
###Code
import os
import argparse
import matplotlib.pyplot as plt
import torch
import torchvision.transforms as transforms
from os2d.modeling.model import build_os2d_from_config
from os2d.config import cfg
import os2d.utils.visualization as visualizer
from os2d.structures.feature_map import FeatureMapSize
from os2d.utils import setup_logger, read_image, get_image_size_after_resize_preserving_aspect_ratio
logger = setup_logger("OS2D")
# use GPU if available
cfg.is_cuda = torch.cuda.is_available()
###Output
_____no_output_____
###Markdown
Download the trained model (if the script does not work, download it from [Google Drive](https://drive.google.com/open?id=1l_aanrxHj14d_QkCpein8wFmainNAzo8) and put it at models/os2d_v2-train.pth). See [README](./README.md) to get links for other released models.
###Code
!./os2d/utils/wget_gdrive.sh models/os2d_v2-train.pth 1l_aanrxHj14d_QkCpein8wFmainNAzo8
cfg.init.model = "models/os2d_v2-train.pth"
net, box_coder, criterion, img_normalization, optimizer_state = build_os2d_from_config(cfg)
###Output
2020-05-02 17:51:32,089 OS2D INFO: Building the OS2D model
2020-05-02 17:51:34,424 OS2D INFO: Creating model on one GPU
2020-05-02 17:51:34,453 OS2D INFO: Reading model file models/os2d_v2-train.pth
2020-05-02 17:51:34,543 OS2D INFO: Loaded complete model from checkpoint
2020-05-02 17:51:34,546 OS2D INFO: Cannot find 'optimizer' in the checkpoint file. Initializing optimizer from scratch.
2020-05-02 17:51:34,549 OS2D INFO: OS2D has 139 blocks of 10169478 parameters (before freezing)
2020-05-02 17:51:34,551 OS2D INFO: OS2D has 139 blocks of 10169478 trainable parameters
###Markdown
Get the image where to detect and two class images.
###Code
input_image = read_image("data/demo/input_image.jpg")
class_images = [read_image("data/demo/class_image_0.jpg"),
read_image("data/demo/class_image_1.jpg")]
class_ids = [0, 1]
###Output
_____no_output_____
###Markdown
Use torchvision to convert images to torch.Tensor and to apply normalization.
###Code
transform_image = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(img_normalization["mean"], img_normalization["std"])
])
###Output
_____no_output_____
###Markdown
Prepare the input image
###Code
h, w = get_image_size_after_resize_preserving_aspect_ratio(h=input_image.size[1],
w=input_image.size[0],
target_size=1500)
input_image = input_image.resize((w, h))
input_image_th = transform_image(input_image)
input_image_th = input_image_th.unsqueeze(0)
if cfg.is_cuda:
input_image_th = input_image_th.cuda()
###Output
_____no_output_____
###Markdown
Prepare the class images
###Code
class_images_th = []
for class_image in class_images:
h, w = get_image_size_after_resize_preserving_aspect_ratio(h=class_image.size[1],
w=class_image.size[0],
target_size=cfg.model.class_image_size)
class_image = class_image.resize((w, h))
class_image_th = transform_image(class_image)
if cfg.is_cuda:
class_image_th = class_image_th.cuda()
class_images_th.append(class_image_th)
###Output
_____no_output_____
###Markdown
Run the network with one command
###Code
with torch.no_grad():
loc_prediction_batch, class_prediction_batch, _, fm_size, transform_corners_batch = net(images=input_image_th, class_images=class_images_th)
###Output
_____no_output_____
###Markdown
Alternatively, one can run the stages of the model separately, which is convenient, e.g., for sharing class feature extraction between many input images.
###Code
# with torch.no_grad():
# feature_map = net.net_feature_maps(input_image_th)
# class_feature_maps = net.net_label_features(class_images_th)
# class_head = net.os2d_head_creator.create_os2d_head(class_feature_maps)
# loc_prediction_batch, class_prediction_batch, _, fm_size, transform_corners_batch = net(class_head=class_head,
# feature_maps=feature_map)
###Output
_____no_output_____
###Markdown
Convert images organized in batches into images organized in pyramid levels. Not needed in the demo, but essential for multiple images in a batch and multiple pyramid levels.
###Code
image_loc_scores_pyramid = [loc_prediction_batch[0]]
image_class_scores_pyramid = [class_prediction_batch[0]]
img_size_pyramid = [FeatureMapSize(img=input_image_th)]
transform_corners_pyramid = [transform_corners_batch[0]]
###Output
_____no_output_____
###Markdown
Decode network outputs into detection boxes
###Code
boxes = box_coder.decode_pyramid(image_loc_scores_pyramid, image_class_scores_pyramid,
img_size_pyramid, class_ids,
nms_iou_threshold=cfg.eval.nms_iou_threshold,
nms_score_threshold=cfg.eval.nms_score_threshold,
transform_corners_pyramid=transform_corners_pyramid)
# remove some fields to lighten visualization
boxes.remove_field("default_boxes")
# Note that the system outputs the correlations that lie in the [-1, 1] segment as the detection scores (the higher the better the detection).
scores = boxes.get_field("scores")
###Output
_____no_output_____
###Markdown
Show class images
###Code
figsize = (8, 8)
fig=plt.figure(figsize=figsize)
columns = len(class_images)
for i, class_image in enumerate(class_images):
fig.add_subplot(1, columns, i + 1)
plt.imshow(class_image)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Show a fixed number of detections that are above a certain threshold. Yellow rectangles show detection boxes. Each box has a class label and the detection score (the higher the better the detection). Red parallelograms illustrate the affine transformations that align class images to the input image at the location of detection.
###Code
plt.rcParams["figure.figsize"] = figsize
cfg.visualization.eval.max_detections = 8
cfg.visualization.eval.score_threshold = float("-inf")
visualizer.show_detections(boxes, input_image,
cfg.visualization.eval)
###Output
_____no_output_____
###Markdown
Testing
###Code
from __future__ import division
import tensorflow as tf
import numpy as np
import os
# import scipy.misc
import PIL.Image as pil
from SfMLearner import SfMLearner
fh = open('/home/johan/Documents/Draft/SfMLearner/2011_09_26/2011_09_26_drive_0002_sync/image_02/data/0000000002.png', 'r')
flags = tf.app.flags
flags.DEFINE_integer("batch_size", 4, "The size of of a sample batch")
flags.DEFINE_integer("img_height", 128, "Image height")
flags.DEFINE_integer("img_width", 416, "Image width")
flags.DEFINE_string("dataset_dir", '/home/johan/Documents/Draft/SfMLearner/', "Dataset directory")
flags.DEFINE_string("output_dir",'/home/johan/Documents/Draft/SfMLearner/output/', "Output directory")
flags.DEFINE_string("ckpt_file", '/home/johan/Documents/Draft/SfMLearner/models/model-190532', "checkpoint file")
FLAGS = flags.FLAGS
def main(_):
with open('data/kitti/test_files_eigen.txt', 'r') as f:
test_files = f.readlines()
test_files = [FLAGS.dataset_dir + t[:-1] for t in test_files]
if not os.path.exists(FLAGS.output_dir):
os.makedirs(FLAGS.output_dir)
basename = os.path.basename(FLAGS.ckpt_file)
output_file = FLAGS.output_dir + '/' + basename
print(output_file)
sfm = SfMLearner()
sfm.setup_inference(img_height=FLAGS.img_height,
img_width=FLAGS.img_width,
batch_size=FLAGS.batch_size,
mode='depth')
saver = tf.train.Saver([var for var in tf.model_variables()])
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
with tf.Session(config=config) as sess:
saver.restore(sess, FLAGS.ckpt_file)
pred_all = []
for t in range(0, len(test_files), FLAGS.batch_size):
if t % 100 == 0:
print('processing %s: %d/%d' % (basename, t, len(test_files)))
inputs = np.zeros(
(FLAGS.batch_size, FLAGS.img_height, FLAGS.img_width, 3),
dtype=np.uint8)
for b in range(FLAGS.batch_size):
idx = t + b
if idx >= len(test_files):
break
#fh = open(test_files[idx], 'r')
#print(fh)
raw_im = pil.open(test_files[idx])
scaled_im = raw_im.resize((FLAGS.img_width, FLAGS.img_height), pil.ANTIALIAS)
inputs[b] = np.array(scaled_im)
# im = scipy.misc.imread(test_files[idx])
# inputs[b] = scipy.misc.imresize(im, (FLAGS.img_height, FLAGS.img_width))
pred = sfm.inference(inputs, sess, mode='depth')
for b in range(FLAGS.batch_size):
idx = t + b
if idx >= len(test_files):
break
pred_all.append(pred['depth'][b,:,:,0])
np.save(output_file, pred_all)
if __name__ == '__main__':
tf.app.run()
print(fh)
###Output
_____no_output_____
###Markdown
Stock Price Predictor
The first step is to load the required modules to make the predictions we need.
###Code
%matplotlib notebook
import warnings
warnings.filterwarnings('ignore')
### TODO: comment out the line below once you've run it once
%run -i './src/download.py'
import sys, os, pdb
import uuid, json, time
import pandas as pd
# import predictions algorithms
from sklearn.ensemble import AdaBoostRegressor, GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor
sys.path.append(os.getcwd() + '/src')
# import main stocks predictor / data preprocessing file
import lib.stocks as st
import lib.visualizer as vzr
###Output
_____no_output_____
###Markdown
Configurations & ParametersBelow we set the tickers we would like to train on and the dates for starting predictions.
###Code
DATE_TRAIN_START = '2016-01-01'
DATE_TEST_START = '2018-01-01'
DATE_END = '2018-06-01'
WINDOWS = [5]
HORIZONS = [7]
TICKERS_TRAIN = ['AMZN', 'GOOGL', 'AAPL', 'NVDA', 'NFLX']
TICKERS_PREDICT = ['NFLX', 'AMZN']
###Output
_____no_output_____
###Markdown
Downloaded CSV file preview - AMZN ticker
###Code
pd.read_csv('_data/tickers/AMZN.csv').tail(10).sort_index()
###Output
_____no_output_____
###Markdown
Processed CSV file preview - AMZN ticker
###Code
tickers_datafiles = st.getStockDataFromCSV(['AMZN'], DATE_TRAIN_START, DATE_TEST_START)
tickers_datafiles[0].tail(10).sort_index()
###Output
_____no_output_____
###Markdown
The next step is to create a directory where we will save the transformed data. This is done to avoid loading many data files in memory, since our algorithm may apply multiple windows and horizons (a file for each). Once we've created a directory, we proceed to load a single dataset representing the needed information about all the specified stocks __before__ transformation.
###Code
# create a directory with a unique ID
TRIAL_ID = uuid.uuid1()
DIRECTORY = "_trials/{}".format(TRIAL_ID)
os.makedirs(DIRECTORY)
print("Loading data for {}...".format(', '.join(TICKERS_TRAIN)))
# Merge tickers data and show some visualizations
data_files = st.loadMergedData(
WINDOWS, HORIZONS, TICKERS_TRAIN, TICKERS_PREDICT,
DATE_TRAIN_START, DATE_END, DATE_TEST_START, DIRECTORY
)
print("A new trial started with ID: {}\n".format(TRIAL_ID))
print("The data files generated are:")
print(data_files)
###Output
Loading data for AMZN, GOOGL, AAPL, NVDA, NFLX...
###Markdown
Now we create a list of regressors which we would like to use for making predictions. We will be comparing all the models we choose to test by using metrics as well as visually through graphs below:
###Code
import lib.tpot_stock_pipeline as tp
classifiers = [
('GradientBoosted', MultiOutputRegressor(GradientBoostingRegressor())),
# ('AdaBoost', MultiOutputRegressor(AdaBoostRegressor()))
('TPot', MultiOutputRegressor(tp.get_tpot_pipeline()))
]
import seaborn as sns
from IPython.display import display
import warnings
warnings.filterwarnings('ignore')
# - combine the results of each classifier along with its w + h into a response object
all_results = {}
# - train each of the models on the data and save the highest performing
# model as a pickle file
for h, w, file_path in data_files:
    # Start measuring time
time_start = time.time()
# load data
finance = pd.read_csv(file_path, encoding='utf-8', header=0)
finance = finance.set_index(finance.columns[0])
finance.index.name = 'Date'
finance.index = pd.to_datetime(finance.index)
    finance = finance.sort_index()
# perform preprocessing
X_train, y_train, X_test, y_test = \
st.prepareDataForClassification(finance, DATE_TEST_START, DATE_END, TICKERS_PREDICT, h, w)
results = {}
print("Starting an iteration with a horizon of {} and a window of {}...".format(h, w))
for i, clf_ in enumerate(classifiers):
print("Training and testing the {} model...".format(clf_[0]))
# perform k-fold cross validation
results['cross_validation_%s'%clf_[0]] = \
st.performCV(X_train, y_train, 10, clf_[1], clf_[0], visualize_folds=True)
# perform predictions with testing data and record result
preds, results['accuracy_%s'%clf_[0]] = \
st.trainPredictStocks(X_train, y_train, X_test, y_test, clf_[1], DIRECTORY)
for c in preds.columns:
preds[c] = preds[c].rolling(window=5).mean()
# print("\nBelow is a sample of of the results:\n")
# display(preds.sample(5).sort_index().reindex_axis(sorted(preds.columns), axis=1))
# plot results
vzr.visualize_predictions(preds, title='Testing Data Results')
results['window'] = w
results['horizon'] = h
# Stop time counter
time_end = time.time()
results['time_lapsed'] = time_end - time_start
all_results['H%s_W%s'%(h, w)] = results
print(json.dumps(all_results, indent=4))
###Output
Starting an iteration with a horizon of 7 and a window of 5...
Training and testing the GradientBoosted model...
###Markdown
Mask R-CNN Demo
A quick intro to using the pre-trained model to detect and segment objects.
###Code
import os
import sys
import random
import math
import numpy as np
import scipy.misc
import matplotlib
import matplotlib.pyplot as plt
import coco
import utils
import model as modellib
import visualize
import PIL
from PIL import Image
%matplotlib inline
# Root directory of the project
ROOT_DIR = os.getcwd()
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
# Path to trained weights file
# Download this file and place in the root of your
# project (See README file for details)
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Directory of images to run detection on
IMAGE_DIR = os.path.join(ROOT_DIR, "images")
###Output
tf.estimator package not installed.
tf.estimator package not installed.
###Markdown
Configurations
We'll be using a model trained on the MS-COCO dataset. The configurations of this model are in the ```CocoConfig``` class in ```coco.py```. For inferencing, modify the configurations a bit to fit the task. To do so, sub-class the ```CocoConfig``` class and override the attributes you need to change.
###Code
class InferenceConfig(coco.CocoConfig):
# Set batch size to 1 since we'll be running inference on
# one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
GPU_COUNT = 1
IMAGES_PER_GPU = 1
config = InferenceConfig()
# config.print()
###Output
_____no_output_____
###Markdown
Create Model and Load Trained Weights
###Code
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)
###Output
_____no_output_____
###Markdown
Class Names

The model classifies objects and returns class IDs, which are integer values that identify each class. Some datasets assign integer values to their classes and some don't. For example, in the MS-COCO dataset, the 'person' class is 1 and 'teddy bear' is 88. The IDs are often sequential, but not always. The COCO dataset, for example, has classes associated with class IDs 70 and 72, but not 71.

To improve consistency, and to support training on data from multiple sources at the same time, our ```Dataset``` class assigns its own sequential integer IDs to each class. For example, if you load the COCO dataset using our ```Dataset``` class, the 'person' class would get class ID = 1 (just like COCO) and the 'teddy bear' class is 78 (different from COCO). Keep that in mind when mapping class IDs to class names.

To get the list of class names, you'd load the dataset and then use the ```class_names``` property like this.

```
# Load COCO dataset
dataset = coco.CocoDataset()
dataset.load_coco(COCO_DIR, "train")
dataset.prepare()

# Print class names
print(dataset.class_names)
```

We don't want to require you to download the COCO dataset just to run this demo, so we're including the list of class names below. The index of the class name in the list represents its ID (first class is 0, second is 1, third is 2, ...etc.)
###Code
# COCO Class names
# Index of the class in the list is its ID. For example, to get ID of
# the teddy bear class, use: class_names.index('teddy bear')
class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',
'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird',
'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',
'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
'kite', 'baseball bat', 'baseball glove', 'skateboard',
'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',
'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',
'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',
'teddy bear', 'hair drier', 'toothbrush']
###Output
_____no_output_____
###Markdown
Run Object Detection
###Code
# Load a random image from the images folder
file_names = next(os.walk(IMAGE_DIR))[2]
image = scipy.misc.imread(os.path.join(IMAGE_DIR, random.choice(file_names)))
# Run detection
results = model.detect([image], verbose=1)
# Visualize results
r = results[0]
# visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
# class_names, r['scores'])
visualize.save_image(image,"noiseccccbbbaaa",r['rois'], r['masks'], r['class_ids'], r['scores'],class_names)
# visualize.draw_instances(image, r['rois'], r['masks'], r['class_ids'],
# class_names, r['scores'])
###Output
Processing 1 images
image shape: (375, 500, 3) min: 0.00000 max: 255.00000
molded_images shape: (1, 1024, 1024, 3) min: -123.70000 max: 151.10000
image_metas shape: (1, 89) min: 0.00000 max: 1024.00000
###Markdown
pyKS
> Calculate KS statistic for models.
###Code
#hide
from nbdev.showdoc import *
###Output
_____no_output_____
###Markdown
example 1
###Code
import pandas as pd
import numpy as np
data = pd.read_csv('refs/two_class_example.csv')
###Output
_____no_output_____
###Markdown
Paths also support autocompletion when typing.
###Code
data.describe().pipe(print)
data.count().pipe(print)
###Output
y yhat
count 500.000000 5.000000e+02
mean 0.516000 5.447397e-01
std 0.500244 4.138621e-01
min 0.000000 1.794262e-07
25% 0.000000 7.289481e-02
50% 1.000000 6.569442e-01
75% 1.000000 9.794348e-01
max 1.000000 9.999965e-01
y 500
yhat 500
dtype: int64
###Markdown
`y=1` marks a good customer; correspondingly, `yhat` is generally higher for these cases.
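As a quick sanity check (a minimal sketch that only assumes the `y` and `yhat` columns loaded above), the mean prediction per class should reflect this:

```
# mean score per class: the y == 1 group should have the larger mean yhat
data.groupby('y')['yhat'].mean()
```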
###Code
data["good"] = data.y
data["bad"] = 1 - data.y
data["score"] = data.yhat
# export
'''Calculate the KS statistic for a model.'''
import pandas as pd
import numpy as np
def summary(df, n_group = 10):
    '''Calculate the KS statistic
Inspired by one WenSui Liu's blog at
https://statcompute.wordpress.com/2012/11/18/calculating-k-s-statistic-with-python/
    Parameters
---------
df: pandas.DataFrame
with M x N size.
M length is the number of bins.
N measures the number of metrics related to KS.
    n_group: int
        The number of groups to cut into.
Returns
-------
    agg2 : The DataFrame returned with KS and related metrics.'''
df["bad"] = 1 - df.good
df['bucket'] = pd.qcut(df.score, n_group, duplicates = 'drop')
grouped = df.groupby('bucket', as_index = False)
agg1 = pd.DataFrame()
agg1['min_scr'] = grouped.min().score
agg1['max_scr'] = grouped.max().score
agg1['bads'] = grouped.sum().bad
agg1['goods'] = grouped.sum().good
agg1['total'] = agg1.bads + agg1.goods
agg2 = (agg1.sort_values(by = 'min_scr')).reset_index(drop = True)
agg2['odds'] = (agg2.goods / agg2.bads).apply('{0:.2f}'.format)
agg2['bad_rate'] = (agg2.bads / agg2.total).apply('{0:.2%}'.format)
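    # KS per bucket: cumulative bad share minus cumulative good share, in percent;
    # the maximum over buckets is the KS statistic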
agg2['ks'] = np.round(((agg2.bads / df.bad.sum()).cumsum() - (agg2.goods / df.good.sum()).cumsum()), 4) * 100
flag = lambda x: '<----' if x == agg2.ks.max() else ''
agg2['max_ks'] = agg2.ks.apply(flag)
return agg2
from pyks.ks import summary
summary(data, n_group = 10)
###Output
_____no_output_____
###Markdown
example 2
###Code
import pandas as pd
import numpy as np
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
data = pd.read_csv('refs/two_class_example.csv')
# export
'''Calculate the KS statistic for a model via the ROC curve.'''
import pandas as pd
import numpy as np
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
def plot(data):
    '''Calculate the KS statistic
Inspired by one Christoforos Anagnostopoulos's tutorial at
https://www.datacamp.com/courses/designing-machine-learning-workflows-in-python
    Parameters
---------
data: pandas.DataFrame
with y and yhat.
y is target.
yhat is prediction.'''
fpr, tpr, thres = roc_curve(data.y, data.yhat)
ks = tpr - fpr
ks_max = np.max(ks)
print(ks_max)
plt.plot(thres, ks)
plt.plot(thres, tpr)
plt.plot(thres, fpr)
plt.xlabel('Cutoff')
plt.ylabel('KS')
plt.title(str(ks_max))
plt.xlim(0,1)
plt.show()
plt.clf()
return ks_max
plot(data)
###Output
0.727689153693382
###Markdown
`save_as_text()` — see https://radimrehurek.com/gensim/corpora/dictionary.html
###Code
from gensim.models.wrappers import DtmModel
from gensim.utils import dict_from_corpus
# Set training parameters.
num_topics = 2
chunksize = 2000
passes = 20
iterations = 1
eval_every = None
# id2word = dictionary.id2token
# this is empty
id2word = dict_from_corpus(corpus)
model = DtmModel('refs/dtm-win64.exe', corpus=corpus, id2word=id2word, num_topics = num_topics,
time_slices=series_slices, model='fixed')
from dynamic_topic_modeling.dtm import display_topic
model_df = display_topic(timespans=len(series_slices), num_topics=num_topics, model=model, num_words=10)
model_df.head()
topics = model.show_topic(topicid=1, time=1, topn=10)
model_df.to_csv("output/demo_model_df.csv", index = False)
###Output
_____no_output_____
###Markdown
![](figure/demo_word_evolution.png)
###Code
from dynamic_topic_modeling.dtm import topic_distribution, visualize_topics  # visualize_topics assumed to live in the same dtm module
topic_df = topic_distribution(num_topics=num_topics, model=model, time_seq=series_slices)
topic_df.to_csv("output/demo_topic_df.csv", index=False)
visualize_topics(topic_df)
###Output
_____no_output_____
###Markdown
###Code
!git clone https://github.com/rakesh4real/ocr.pytorch.git
%cd ocr.pytorch/checkpoints
!curl -O https://raw.githubusercontent.com/rakesh4real/ocr.pytorch/master/checkpoints/CTPN.pth
!curl -O https://raw.githubusercontent.com/rakesh4real/ocr.pytorch/master/checkpoints/CRNN-1010.pth
%cd ..
###Output
/content/ocr.pytorch/checkpoints
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 67.6M 100 67.6M 0 0 254M 0 --:--:-- --:--:-- --:--:-- 254M
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 43.1M 100 43.1M 0 0 219M 0 --:--:-- --:--:-- --:--:-- 219M
/content/ocr.pytorch
###Markdown
Test Preds
###Code
!python test_one.py test_images/t9.jpg
import cv2
img = cv2.imread('result.png', cv2.IMREAD_UNCHANGED)
cv2.imshow('img', img)
###Output
_____no_output_____
###Markdown
Test Detections
###Code
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
import cv2
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn.functional as F
from detect.ctpn_model import CTPN_Model
from detect.ctpn_utils import gen_anchor, bbox_transfor_inv, clip_box, filter_bbox,nms, TextProposalConnectorOriented
from detect.ctpn_utils import resize
from detect import config
prob_thresh = 0.5
height = 720
gpu = True
if not torch.cuda.is_available():
gpu = False
device = torch.device('cuda:0' if gpu else 'cpu')
weights = os.path.join(config.checkpoints_dir, 'CTPN.pth')
model = CTPN_Model()
model.load_state_dict(torch.load(weights, map_location=device)['model_state_dict'])
model.to(device)
model.eval()
def dis(image):
cv2.imshow('image', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
def disp(image):
plt.figure(figsize=(10,10))
plt.imshow(image)
plt.axis('off')
plt.show()
def disp_img_at(path):
disp(cv2.imread(path))
def get_det_boxes(image, display = True, expand = True):
image = resize(image, height=height)
image_r = image.copy()
image_c = image.copy()
h, w = image.shape[:2]
image = image.astype(np.float32) - config.IMAGE_MEAN
image = torch.from_numpy(image.transpose(2, 0, 1)).unsqueeze(0).float()
with torch.no_grad():
image = image.to(device)
cls, regr = model(image)
cls_prob = F.softmax(cls, dim=-1).cpu().numpy()
regr = regr.cpu().numpy()
anchor = gen_anchor((int(h / 16), int(w / 16)), 16)
bbox = bbox_transfor_inv(anchor, regr)
bbox = clip_box(bbox, [h, w])
# print(bbox.shape)
fg = np.where(cls_prob[0, :, 1] > prob_thresh)[0]
# print(np.max(cls_prob[0, :, 1]))
select_anchor = bbox[fg, :]
select_score = cls_prob[0, fg, 1]
select_anchor = select_anchor.astype(np.int32)
# print(select_anchor.shape)
keep_index = filter_bbox(select_anchor, 16)
# nms
select_anchor = select_anchor[keep_index]
select_score = select_score[keep_index]
select_score = np.reshape(select_score, (select_score.shape[0], 1))
nmsbox = np.hstack((select_anchor, select_score))
keep = nms(nmsbox, 0.3)
# print(keep)
select_anchor = select_anchor[keep]
select_score = select_score[keep]
# text line-
textConn = TextProposalConnectorOriented()
text = textConn.get_text_lines(select_anchor, select_score, [h, w])
# expand text
if expand:
for idx in range(len(text)):
text[idx][0] = max(text[idx][0] - 10, 0)
text[idx][2] = min(text[idx][2] + 10, w - 1)
text[idx][4] = max(text[idx][4] - 10, 0)
text[idx][6] = min(text[idx][6] + 10, w - 1)
# print(text)
if display:
blank = np.zeros(image_c.shape,dtype=np.uint8)
for box in select_anchor:
pt1 = (box[0], box[1])
pt2 = (box[2], box[3])
blank = cv2.rectangle(blank, pt1, pt2, (50, 0, 0), -1)
image_c = image_c+blank
image_c[image_c>255] = 255
for detid, i in enumerate(text):
s = str(round(i[-1] * 100, 2)) + '%'
i = [int(j) for j in i]
# (0,1)+ ---------------+ (2,3)
# | text |
# (4,5)+ ---------------+ (6,7)
x1, y1 = i[0], i[1] # top-left
x2, y2 = i[6], i[7] # bottom-right
crop = image_c[y1:y2, x1:x2, :]
plt.close();
# plt.imshow(crop);
# plt.axis('off')
# plt.show();
cv2.imwrite(f'cropped/{detid}.png', crop)
cv2.line(image_c, (i[0], i[1]), (i[2], i[3]), (0, 0, 255), 2)
cv2.line(image_c, (i[0], i[1]), (i[4], i[5]), (0, 0, 255), 2)
cv2.line(image_c, (i[6], i[7]), (i[2], i[3]), (0, 0, 255), 2)
cv2.line(image_c, (i[4], i[5]), (i[6], i[7]), (0, 0, 255), 2)
cv2.putText(image_c, s, (i[0]+13, i[1]+13),
cv2.FONT_HERSHEY_SIMPLEX,
0.6,
(255,0,0),
2,
cv2.LINE_AA)
# dis(image_c)
# print(text)
return text,image_c,image_r
if __name__ == '__main__':
img_path = '/content/ocr.pytorch/test_images/t9.jpg'
image = cv2.imread(img_path)
print
text,image_c, image_r = get_det_boxes(image)
# predictions
# print("="*100+"\n", text, "\n"+"="*100)
disp(image_r) # original
disp(image_c) # detected
###Output
_____no_output_____
###Markdown
Test Predictions
###Code
import torch.nn as nn
# import torchvision.models as models
import torch, os
from PIL import Image
import cv2
import torchvision.transforms as transforms
from torch.autograd import Variable
import numpy as np
import random
from recognize.crnn import CRNN
from recognize import config
# copy from mydataset
class resizeNormalize(object):
def __init__(self, size, interpolation=Image.LANCZOS, is_test=True):
self.size = size
self.interpolation = interpolation
self.toTensor = transforms.ToTensor()
self.is_test = is_test
def __call__(self, img):
w, h = self.size
w0 = img.size[0]
h0 = img.size[1]
if w <= (w0 / h0 * h):
img = img.resize(self.size, self.interpolation)
img = self.toTensor(img)
img.sub_(0.5).div_(0.5)
else:
w_real = int(w0 / h0 * h)
img = img.resize((w_real, h), self.interpolation)
img = self.toTensor(img)
img.sub_(0.5).div_(0.5)
tmp = torch.zeros([img.shape[0], h, w])
start = random.randint(0, w - w_real - 1)
if self.is_test:
start = 0
tmp[:, :, start:start + w_real] = img
img = tmp
return img
# copy from utils
class strLabelConverter(object):
def __init__(self, alphabet, ignore_case=False):
self._ignore_case = ignore_case
if self._ignore_case:
alphabet = alphabet.lower()
self.alphabet = alphabet + '_' # for `-1` index
self.dict = {}
for i, char in enumerate(alphabet):
# NOTE: 0 is reserved for 'blank' required by wrap_ctc
self.dict[char] = i + 1
# print(self.dict)
def encode(self, text):
length = []
result = []
for item in text:
item = item.decode('utf-8', 'strict')
length.append(len(item))
for char in item:
if char not in self.dict.keys():
index = 0
else:
index = self.dict[char]
result.append(index)
text = result
return (torch.IntTensor(text), torch.IntTensor(length))
def decode(self, t, length, raw=False):
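        # CTC-style greedy decoding: with raw=False, collapse repeated labels and drop blanks (index 0)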
if length.numel() == 1:
length = length[0]
assert t.numel() == length, "text with length: {} does not match declared length: {}".format(t.numel(),
length)
if raw:
return ''.join([self.alphabet[i - 1] for i in t])
else:
char_list = []
for i in range(length):
if t[i] != 0 and (not (i > 0 and t[i - 1] == t[i])):
char_list.append(self.alphabet[t[i] - 1])
return ''.join(char_list)
else:
# batch mode
assert t.numel() == length.sum(), "texts with length: {} does not match declared length: {}".format(
t.numel(), length.sum())
texts = []
index = 0
for i in range(length.numel()):
l = length[i]
texts.append(
self.decode(
t[index:index + l], torch.IntTensor([l]), raw=raw))
index += l
return texts
# recognize api
class PytorchOcr():
def __init__(self, model_path='checkpoints/CRNN-1010.pth'):
alphabet_unicode = config.alphabet_v2
self.alphabet = ''.join([chr(uni) for uni in alphabet_unicode])
# print(len(self.alphabet))
self.nclass = len(self.alphabet) + 1
self.model = CRNN(config.imgH, 1, self.nclass, 256)
self.cuda = False
if torch.cuda.is_available():
self.cuda = True
self.model.cuda()
self.model.load_state_dict({k.replace('module.', ''): v for k, v in torch.load(model_path).items()})
else:
# self.model = nn.DataParallel(self.model)
self.model.load_state_dict(torch.load(model_path, map_location='cpu'))
self.model.eval()
self.converter = strLabelConverter(self.alphabet)
def recognize(self, img):
h,w = img.shape[:2]
if len(img.shape) == 3:
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
image = Image.fromarray(img)
transformer = resizeNormalize((int(w/h*32), 32))
image = transformer(image)
image = image.view(1, *image.size())
image = Variable(image)
if self.cuda:
image = image.cuda()
preds = self.model(image)
_, preds = preds.max(2)
preds = preds.transpose(1, 0).contiguous().view(-1)
preds_size = Variable(torch.IntTensor([preds.size(0)]))
txt = self.converter.decode(preds.data, preds_size.data, raw=False).strip()
return txt
if __name__ == '__main__':
for i in range(29):
recognizer = PytorchOcr()
img_path = f'cropped/{i}.png'
img = cv2.imread(img_path)
h, w = img.shape[:2]
res = recognizer.recognize(img)
disp(img)
print(res)
###Output
_____no_output_____
###Markdown
Web Application
###Code
from google.colab import output
output.serve_kernel_port_as_window(8084)
###Output
_____no_output_____
###Markdown
click on link above 👆 to see output of code below 👇
###Code
!pip install fastapi
!pip install uvicorn
!pip install aiofiles
!pip install python-multipart
!python -m uvicorn main:app --reload --port 8084
###Output
[32mINFO[0m: Uvicorn running on [1mhttp://127.0.0.1:8084[0m (Press CTRL+C to quit)
[32mINFO[0m: Started reloader process [[36m[1m222[0m] using [36m[1mstatreload[0m
[32mINFO[0m: Started server process [[36m228[0m]
[32mINFO[0m: Waiting for application startup.
[32mINFO[0m: Application startup complete.
[32mINFO[0m: 127.0.0.1:33612 - "[1mGET / HTTP/1.1[0m" [32m200 OK[0m
[32mINFO[0m: 127.0.0.1:33620 - "[1mGET /favicon.ico HTTP/1.1[0m" [31m404 Not Found[0m
{0: [array([5.87700000e+03, 1.22841976e+02, 6.44221080e+03, 1.27197103e+02,
5.87578920e+03, 2.74420546e+02, 6.44100000e+03, 2.78775673e+02,
9.98573661e-01]), '[e]ated'], 1: [array([5.54100000e+03, 1.25891216e+02, 5.84977535e+03, 1.27390600e+02,
5.54022465e+03, 2.75220519e+02, 5.84900000e+03, 2.76719902e+02,
9.99737084e-01]), 'and'], 2: [array([1.90900000e+03, 1.26135912e+02, 2.72945643e+03, 1.28605507e+02,
1.90854357e+03, 2.74074506e+02, 2.72900000e+03, 2.76544102e+02,
9.83842313e-01]), 'Portalhas'], 3: [array([1.34851515e+03, 1.27910549e+02, 1.88100000e+03, 1.26278506e+02,
1.34900000e+03, 2.80159005e+02, 1.88148485e+03, 2.78526962e+02,
9.99181151e-01]), 'Online'], 4: [array([2.75536857e+03, 1.31813448e+02, 3.16100000e+03, 1.27537968e+02,
2.75700000e+03, 2.78962032e+02, 3.16263143e+03, 2.74686552e+02,
9.99672353e-01]), '0een'], 5: [array([6.46900000e+03, 1.27997368e+02, 7.33794339e+03, 1.33517594e+02,
6.46805661e+03, 2.73079475e+02, 7.33700000e+03, 2.78599701e+02,
9.82364178e-01]), 'SemviCes0'], 6: [array([3.18900000e+03, 1.25285312e+02, 5.51335531e+03, 1.30661968e+02,
3.18864469e+03, 2.77567091e+02, 5.51300000e+03, 2.82943747e+02,
9.84574914e-01]), 'desianed todellver Passport'], 7: [array([101. , 125.39440322, 874.04954094, 130.45308434,
99.95045906, 281.63202751, 873. , 286.69070864,
0.99953085]), 'Pass00rt'], 8: [array([9.01000000e+02, 1.30902689e+02, 1.32216554e+03, 1.34046439e+02,
8.99834465e+02, 2.79633561e+02, 1.32100000e+03, 2.82777310e+02,
9.99019980e-01]), 'Seve'], 9: [array([100.59638069, 407.35415063, 889. , 405.22733688,
101. , 553.17911381, 889.40361931, 551.05230005,
0.99075639]), 'Ctzens in'], 10: [array([3.97227643e+03, 4.04529826e+02, 4.56900000e+03, 4.01862387e+02,
3.97300000e+03, 5.60970922e+02, 4.56972357e+03, 5.58303483e+02,
9.99594927e-01]), 'TeiaDle'], 11: [array([9.17000000e+02, 4.03383973e+02, 1.57854648e+03, 4.09704997e+02,
9.15453521e+02, 5.60342415e+02, 1.57700000e+03, 5.66663439e+02,
9.81128514e-01]), 'atimejv'], 12: [array([3.04482219e+03, 4.12116323e+02, 3.94500000e+03, 4.11056585e+02,
3.04500000e+03, 5.59797933e+02, 3.94517781e+03, 5.58738195e+02,
9.81697619e-01]), 'aCCeSs0e.'], 13: [array([2.59651365e+03, 4.20932185e+02, 3.01700000e+03, 4.19449615e+02,
2.59700000e+03, 5.52310390e+02, 3.01748635e+03, 5.50827820e+02,
9.83639240e-01]), 'mOre'], 14: [array([4.59700000e+03, 4.20743262e+02, 5.22530554e+03, 4.22178494e+02,
4.59669446e+03, 5.50242541e+02, 5.22500000e+03, 5.51677773e+02,
9.84889984e-01]), 'manner'], 15: [array([1.60500000e+03, 4.06724940e+02, 2.56905976e+03, 4.07080135e+02,
1.60494024e+03, 5.65563907e+02, 2.56900000e+03, 5.65919102e+02,
9.98935163e-01]), 'tansparent.']}
[32mINFO[0m: 127.0.0.1:33638 - "[1mPOST /uploadfile/ HTTP/1.1[0m" [32m200 OK[0m
###Markdown
Let's load some sample datasets to test the code.
###Code
!bash ./datasets/download_sample_dataset.sh
###Output
Downloading...
From: https://drive.google.com/uc?id=1QZNgRojYpYBLzUQJntWAmw1QwQMh4H50
To: /tf/Dropbox (Partners HealthCare)/ubuntu/docker/repos/DeepStrain/datasets/sample_nifti_3D/patient101_frame14.nii.gz
100%|████████████████████████████████████████| 667k/667k [00:00<00:00, 7.39MB/s]
Downloading...
From: https://drive.google.com/uc?id=1zFJM_qQKwz85xiYpX3XBRqhL0SQwy-Iw
To: /tf/Dropbox (Partners HealthCare)/ubuntu/docker/repos/DeepStrain/datasets/sample_nifti_3D/patient101_frame01.nii.gz
100%|████████████████████████████████████████| 664k/664k [00:00<00:00, 3.10MB/s]
Downloading...
From: https://drive.google.com/uc?id=1FqTquCYhLD2-EKxmCR9A5zt5265AEPdQ
To: /tf/Dropbox (Partners HealthCare)/ubuntu/docker/repos/DeepStrain/datasets/sample_nifti_4D/patient101_4d.nii.gz
20.0MB [00:01, 17.2MB/s]
###Markdown
Load the pre-trained models attached to the publication. This will download the trained parameters for cardiac segmentation and motion estimation:
###Code
!bash ./pretrained_models/download_model.sh
###Output
Note: available models are carson_Jan2021, carmen_Jan2021
Downloading models ...
Downloading...
From: https://drive.google.com/uc?id=1rINpNPZ4_lT9XuFB6Q7gyna_L4O3AIY9
To: /tf/Dropbox (Partners HealthCare)/ubuntu/docker/repos/DeepStrain/pretrained_models/carson_Jan2021.h5
229MB [00:12, 18.6MB/s]
Downloading...
From: https://drive.google.com/uc?id=10eMGoYYa4xFdwFuiwC7bwVSJ6b-bx7Ni
To: /tf/Dropbox (Partners HealthCare)/ubuntu/docker/repos/DeepStrain/pretrained_models/carmen_Jan2021.h5
449MB [00:23, 18.9MB/s]
###Markdown
Test segmentation on 3D data in NIFTI format.
###Code
!bash ./scripts/test_segmentation.sh ./datasets/sample_nifti_3D NIFTI ./results/sample_nifti_3D
###Output
+ DATAROOT=./datasets/sample_nifti_3D
+ DATAFORMAT=NIFTI
+ RESULTS_DIR=./results/sample_nifti_3D
+ CARSON_PATH=../private_models/main_carson_model.h5
+ CARMEN_PATH=./pretrained_models/carmen_Jan2021.h5
+ PIPELINE=segmentation
+ CUDA_VISIBLE_DEVICES=
+ python ./test.py --dataroot ./datasets/sample_nifti_3D --dataformat NIFTI --results_dir ./results/sample_nifti_3D --pretrained_models_netS ../private_models/main_carson_model.h5 --pretrained_models_netME ./pretrained_models/carmen_Jan2021.h5 --pipeline segmentation
2021-02-14 18:02:27.114286: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-02-14 18:02:29.315607: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2021-02-14 18:02:29.340591: E tensorflow/stream_executor/cuda/cuda_driver.cc:314] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2021-02-14 18:02:29.340649: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: e990b504c5b4
2021-02-14 18:02:29.340666: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: e990b504c5b4
2021-02-14 18:02:29.340797: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: 450.102.4
2021-02-14 18:02:29.340843: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 450.102.4
2021-02-14 18:02:29.340860: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:310] kernel version seems to match DSO: 450.102.4
2021-02-14 18:02:29.341267: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-02-14 18:02:29.352981: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 1696155000 Hz
2021-02-14 18:02:29.353521: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4ad3bc0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-02-14 18:02:29.353563: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
/tf/Dropbox (Partners HealthCare)/ubuntu/docker/repos/DeepStrain/data/nifti_dataset.py:77: UserWarning: Affine in nifti might be set incorrectly. Setting to affine=affine*zooms
warnings.warn("Affine in nifti might be set incorrectly. Setting to affine=affine*zooms")
/tf/Dropbox (Partners HealthCare)/ubuntu/docker/repos/DeepStrain/data/nifti_dataset.py:77: UserWarning: Affine in nifti might be set incorrectly. Setting to affine=affine*zooms
warnings.warn("Affine in nifti might be set incorrectly. Setting to affine=affine*zooms")
###Markdown
Test segmentation on 4D (3D + time) data in NIFTI format.
###Code
!bash ./scripts/test_segmentation.sh ./datasets/sample_nifti_4D NIFTI ./results/sample_nifti_4D
###Output
+ DATAROOT=./datasets/sample_nifti_4D
+ DATAFORMAT=NIFTI
+ RESULTS_DIR=./results/sample_nifti_4D
+ CARSON_PATH=../private_models/main_carson_model.h5
+ CARMEN_PATH=./pretrained_models/carmen_Jan2021.h5
+ PIPELINE=segmentation
+ CUDA_VISIBLE_DEVICES=
+ python ./test.py --dataroot ./datasets/sample_nifti_4D --dataformat NIFTI --results_dir ./results/sample_nifti_4D --pretrained_models_netS ../private_models/main_carson_model.h5 --pretrained_models_netME ./pretrained_models/carmen_Jan2021.h5 --pipeline segmentation
2021-02-14 18:02:36.541501: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-02-14 18:02:38.748517: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2021-02-14 18:02:38.772471: E tensorflow/stream_executor/cuda/cuda_driver.cc:314] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2021-02-14 18:02:38.772523: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: e990b504c5b4
2021-02-14 18:02:38.772544: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: e990b504c5b4
2021-02-14 18:02:38.772684: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: 450.102.4
2021-02-14 18:02:38.772733: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 450.102.4
2021-02-14 18:02:38.772749: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:310] kernel version seems to match DSO: 450.102.4
2021-02-14 18:02:38.773126: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-02-14 18:02:38.784659: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 1696155000 Hz
2021-02-14 18:02:38.785149: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x6260db0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-02-14 18:02:38.785202: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
/tf/Dropbox (Partners HealthCare)/ubuntu/docker/repos/DeepStrain/data/nifti_dataset.py:77: UserWarning: Affine in nifti might be set incorrectly. Setting to affine=affine*zooms
warnings.warn("Affine in nifti might be set incorrectly. Setting to affine=affine*zooms")
###Markdown
Test motion on 4D (3D + time) data in NIFTI format. Motion is only available for 4D data.
###Code
!bash ./scripts/test_motion.sh ./datasets/sample_nifti_4D NIFTI ./results/sample_nifti_4D
###Output
+ DATAROOT=./datasets/sample_nifti_4D
+ DATAFORMAT=NIFTI
+ RESULTS_DIR=./results/sample_nifti_4D
+ CARSON_PATH=./pretrained_models/carson_Jan2021.h5
+ CARMEN_PATH=./pretrained_models/carmen_Jan2021.h5
+ PIPELINE=motion
+ CUDA_VISIBLE_DEVICES=
+ python ./test.py --dataroot ./datasets/sample_nifti_4D --dataformat NIFTI --results_dir ./results/sample_nifti_4D --pretrained_models_netS ./pretrained_models/carson_Jan2021.h5 --pretrained_models_netME ./pretrained_models/carmen_Jan2021.h5 --pipeline motion
2021-02-14 18:03:25.301883: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-02-14 18:03:27.512196: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2021-02-14 18:03:27.536612: E tensorflow/stream_executor/cuda/cuda_driver.cc:314] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2021-02-14 18:03:27.536661: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: e990b504c5b4
2021-02-14 18:03:27.536678: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: e990b504c5b4
2021-02-14 18:03:27.536807: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: 450.102.4
2021-02-14 18:03:27.536855: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 450.102.4
2021-02-14 18:03:27.536872: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:310] kernel version seems to match DSO: 450.102.4
2021-02-14 18:03:27.537213: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-02-14 18:03:27.549271: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 1696155000 Hz
2021-02-14 18:03:27.549656: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4af9c90 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-02-14 18:03:27.549686: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
/tf/Dropbox (Partners HealthCare)/ubuntu/docker/repos/DeepStrain/data/nifti_dataset.py:77: UserWarning: Affine in nifti might be set incorrectly. Setting to affine=affine*zooms
warnings.warn("Affine in nifti might be set incorrectly. Setting to affine=affine*zooms")
2021-02-14 18:03:38.911740: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 1946157056 exceeds 10% of free system memory.
2021-02-14 18:03:39.743555: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 1946157056 exceeds 10% of free system memory.
2021-02-14 18:03:40.195599: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 1946157056 exceeds 10% of free system memory.
2021-02-14 18:03:40.499049: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 1946157056 exceeds 10% of free system memory.
2021-02-14 18:03:41.084644: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 1946157056 exceeds 10% of free system memory.
###Markdown
Test both segmentation and motion on 4D niftis.
###Code
!bash ./scripts/test_segmentation_motion.sh ./datasets/sample_nifti_4D NIFTI ./results/sample_nifti_4D
###Output
+ DATAROOT=./datasets/sample_nifti_4D
+ DATAFORMAT=NIFTI
+ RESULTS_DIR=./results/sample_nifti_4D
+ CARSON_PATH=./pretrained_models/carson_Jan2021.h5
+ CARMEN_PATH=./pretrained_models/carmen_Jan2021.h5
+ PIPELINE=segmentation_motion
+ CUDA_VISIBLE_DEVICES=
+ python ./test.py --dataroot ./datasets/sample_nifti_4D --dataformat NIFTI --results_dir ./results/sample_nifti_4D --pretrained_models_netS ./pretrained_models/carson_Jan2021.h5 --pretrained_models_netME ./pretrained_models/carmen_Jan2021.h5 --pipeline segmentation_motion
2021-02-14 18:04:19.796322: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-02-14 18:04:21.998095: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2021-02-14 18:04:22.024630: E tensorflow/stream_executor/cuda/cuda_driver.cc:314] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2021-02-14 18:04:22.024683: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: e990b504c5b4
2021-02-14 18:04:22.024704: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: e990b504c5b4
2021-02-14 18:04:22.024818: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: 450.102.4
2021-02-14 18:04:22.024863: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 450.102.4
2021-02-14 18:04:22.024878: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:310] kernel version seems to match DSO: 450.102.4
2021-02-14 18:04:22.025239: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-02-14 18:04:22.036833: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 1696155000 Hz
2021-02-14 18:04:22.037295: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5482b60 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-02-14 18:04:22.037323: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
/tf/Dropbox (Partners HealthCare)/ubuntu/docker/repos/DeepStrain/data/nifti_dataset.py:77: UserWarning: Affine in nifti might be set incorrectly. Setting to affine=affine*zooms
warnings.warn("Affine in nifti might be set incorrectly. Setting to affine=affine*zooms")
/tf/Dropbox (Partners HealthCare)/ubuntu/docker/repos/DeepStrain/data/nifti_dataset.py:77: UserWarning: Affine in nifti might be set incorrectly. Setting to affine=affine*zooms
warnings.warn("Affine in nifti might be set incorrectly. Setting to affine=affine*zooms")
2021-02-14 18:05:17.144698: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 1946157056 exceeds 10% of free system memory.
2021-02-14 18:05:17.945462: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 1946157056 exceeds 10% of free system memory.
2021-02-14 18:05:18.392550: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 1946157056 exceeds 10% of free system memory.
2021-02-14 18:05:18.698553: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 1946157056 exceeds 10% of free system memory.
2021-02-14 18:05:19.246800: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 1946157056 exceeds 10% of free system memory.
###Markdown
After the segmentations and motion estimates have been generated, we can use both to calculate myocardial strain. Note that we're passing the output folder from the previous runs.
###Code
!bash ./scripts/test_strain.sh ./results/sample_nifti_4D
###Output
+ RESULTS_DIR=./results/sample_nifti_4D
+ PIPELINE=strain
+ CUDA_VISIBLE_DEVICES=
+ python ./test.py --dataroot ./results/sample_nifti_4D --results_dir ./results/sample_nifti_4D --pipeline strain
2021-02-14 18:05:58.124863: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
0.0 0.0
-0.0028643845930383114 1.082031278527137e-05
0.050226276271155106 -0.08250821851016393
0.0565662083407661 -0.08721526046380865
0.06341052114161663 -0.09039004858265032
0.062187569151276205 -0.09092932158690682
0.06273986208445685 -0.09008545707566279
0.06704142900754055 -0.09048872862980738
0.06900019280843682 -0.09202953300260222
0.06752667505148324 -0.09357088160323081
0.062176541303209334 -0.09396584141478774
0.05509482533956014 -0.09227094988109329
-0.011429068055575722 -0.006970679116830844
0.04503282824559175 -0.08830198004612085
0.03627042423655744 -0.08262235528522073
0.029786156419924492 -0.07606680680357371
0.026712081007800637 -0.06975290381524307
0.019263847211321777 -0.06391941641474
0.007899134163335668 -0.055720237241491964
-0.005480475496803074 -0.03917932457221275
-0.00592674874319504 -0.010661710877623127
-0.0024683947363135297 -0.0014360990600069954
-0.0021524594363897293 5.030914597078318e-05
-0.026725685733460826 -0.02099679369525766
-0.02529592552751031 -0.03299636940750714
-0.010376969736070649 -0.04378397382622917
0.002002523134231003 -0.05443799860918108
0.015893347546116793 -0.06210413845861033
0.02648765196612738 -0.06909838143488618
0.03835757750338634 -0.0751729741905356
###Markdown
User Demo
###Code
url = "http://127.0.0.1:5000"
filepath = 'C:\\Users\\reonh\Documents\\NUS\AY2022_S1\Capstone\capstone_21\python_backend\database\lpdlprnet\plate.jpg'
folderpath = 'C:\\Users\\reonh\Documents\\NUS\AY2022_S1\Capstone\capstone_21\python_backend\database\lpdlprnet\\'
filename = 'plate.jpg'
###Output
_____no_output_____
###Markdown
Check Server Status
###Code
import requests
response = requests.get( url + "/api/lpdlprnet/" + 'internal')
print(response.json(), flush=True)
###Output
{'HTTPStatus': 200, 'status': 'Active'}
###Markdown
Scenario: Developer needs to recognise license plates for the following images
Get Predictions
###Code
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
files = [folderpath + 'plate.jpg', folderpath + 'plate_2.jpg']
def process(filename: str=None):
"""
View multiple images stored in files, stacking vertically
Arguments:
filename: str - path to filename containing image
"""
image = mpimg.imread(filename)
plt.figure()
plt.imshow(image)
for file in files:
print(file)
process(file)
import requests
baseURL = url
for file in files:
filename = file
filepath = file
request_files=[ ('image',(filename,open(filepath,'rb'),'image/jpeg')) ]
    headers = {}
    payload = {}  # no extra form fields; defined so the request call below runs
response = requests.post( baseURL + "/api/lpdlprnet/internal", headers=headers, data=payload, files=request_files)
print(response.json()['0']['0_lpr']['license_plate'])
###Output
3SAM123
FE002CA
###Markdown
Can we explain this output?
###Code
import requests
baseURL = url
filename = filename
filepath = filepath
files=[ ('image',(filename,open(filepath,'rb'),'image/jpeg')) ]
headers = {}
payload = {}  # no extra form fields; defined so the request call below runs
response = requests.post( baseURL + "/api/lpdlprnet/explain/internal", headers=headers, data=payload, files=files)
from IPython.display import Markdown, display
display(Markdown(response.json()['explain_markdown']))
###Output
_____no_output_____
###Markdown
How to write this code?
###Code
import requests
baseURL = url
files=[ ('image',(filename,open(filepath,'rb'),'image/jpeg')) ]
headers = {}
payload = {}  # no extra form fields; defined so the request call below runs
response = requests.post( baseURL + "/api/lpdlprnet/internal", headers=headers, data=payload, files=files)
response.json()
###Output
_____no_output_____
###Markdown
MeshCat Python
###Code
import numpy as np
import os
import time
import meshcat
import meshcat.geometry as g
import meshcat.transformations as tf
# Create a new visualizer
vis = meshcat.Visualizer()
###Output
You can open the visualizer by visiting the following URL:
http://127.0.0.1:7001/static/
###Markdown
By default, creating the `Visualizer` will start up a meshcat server for you in the background. The easiest way to open the visualizer is with its ``open`` method:
###Code
vis.open()
###Output
_____no_output_____
###Markdown
If ``vis.open()`` does not work for you, you can also point your browser to the server's URL:
###Code
vis.url()
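# Note (sketch): if a meshcat server is already running, you can connect to it directly
# instead of starting a new one, e.g.
#   vis = meshcat.Visualizer(zmq_url="tcp://127.0.0.1:6000")  # use the zmq url your server prints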
f= '/Users/dipinoch/Documents/GitHub/meshcat-python/xyz.out'
data= np.loadtxt(f)
vertices= []
for p in data:
vertices.append(p)
# vis.set_object(g.Points(
# g.PointsGeometry(vertices, color=vertices),
# g.PointsMaterial()
# ))
# verts = np.random.random((3, 100000)).astype(np.float32)
verts= np.transpose(data)
print(verts.shape)
vis = meshcat.Visualizer().open()
vis.set_object(g.Points(
g.PointsGeometry(verts, color=verts),
g.PointsMaterial()
))
#imports
import cv2
import imagezmq
import numpy as np
import os
import time
import meshcat
import meshcat.geometry as g
import meshcat.transformations as tf
from meshcat.animation import Animation
image_hub = imagezmq.ImageHub()
vis = meshcat.Visualizer().open()
while True: # show streamed images until Ctrl-C
anim = Animation()
c=0
with anim.at_frame(vis, 0) as frame:
c=c+1
with anim.at_frame(vis, c) as frame:
# `set_animation` actually sends the animation to the
# viewer. By default, the viewer will play the animation
# right away. To avoid that, you can also pass `play=false`.
vis.set_animation(anim)
cam_name, data = image_hub.recv_image()
image_hub.send_reply(b'OK')
verts= np.transpose(data)
vis.set_object(g.Points(
g.PointsGeometry(verts, color=verts),
g.PointsMaterial()
))
###Output
You can open the visualizer by visiting the following URL:
http://127.0.0.1:7001/static/
###Markdown
To create a 3D object, we use the `set_object` method:
###Code
vis.set_object(g.Box([0.2, 0.2, 0.2]))
###Output
_____no_output_____
###Markdown
And to move that object around, we use `set_transform`:
###Code
for theta in np.linspace(0, 2 * np.pi, 200):
vis.set_transform(tf.rotation_matrix(theta, [0, 0, 1]))
time.sleep(0.005)
###Output
_____no_output_____
###Markdown
MeshCat also supports embedding a 3D view inside a Jupyter notebook cell:
###Code
vis.jupyter_cell()
###Output
_____no_output_____
###Markdown
Notice how the 3D scene displayed in the Jupyter cell matches the one in the external window. The meshcat server process remembers the objects and transforms you've sent, so opening a new browser pointing to the same URL should give you the same scene. Calling `set_object` again will replace the existing Box:
###Code
vis.set_object(g.Box([0.1, 0.1, 0.2]))
###Output
_____no_output_____
###Markdown
We can also delete the box:
###Code
vis.delete()
###Output
_____no_output_____
###Markdown
The Scene Tree

Obviously, we will often want to draw more than one object. So how do we do that? The fundamental idea of MeshCat is that it gives direct access to the *scene graph*. You can think of the scene as a tree of objects, and we name each object in the tree by its *path* from the root of the tree. Children in the tree inherit the transformations applied to their parents. So, for example, we might have a `robot` at the path `/robot`, and that robot might have a child called `head` at the path `/robot/head`. Each path in the tree can have a different geometry associated with it.

First, let's create the robot. We access paths in the tree by indexing into the Visualizer:
###Code
vis["robot"].set_object(g.Box([0.15, 0.35, 0.4]))
###Output
_____no_output_____
###Markdown
Now let's give the robot a head:
###Code
vis["robot"]["head"].set_object(g.Box([0.2, 0.2, 0.2]))
vis["robot"]["head"].set_transform(tf.translation_matrix([0, 0, 0.32]))
###Output
_____no_output_____
###Markdown
We can move the entire robot by setting the transform of the `/robot` path:
###Code
for x in np.linspace(0, np.pi, 100):
vis["robot"].set_transform(tf.translation_matrix([np.sin(x), 0, 0]))
time.sleep(0.01)
###Output
_____no_output_____
###Markdown
And we can move just the head by setting the transform of `/robot/head`:
###Code
for x in np.linspace(0, 2 * np.pi, 100):
# vis["robot/head"] is a shorthand for vis["robot"]["head"]
vis["robot/head"].set_transform(
tf.translation_matrix([0, 0, 0.32]).dot(
tf.rotation_matrix(x, [0, 0, 1])))
time.sleep(0.01)
###Output
_____no_output_____
###Markdown
We can delete the head...
###Code
vis["robot/head"].delete()
###Output
_____no_output_____
###Markdown
...or the entire robot:
###Code
vis["robot"].delete()
###Output
_____no_output_____
###Markdown
Other GeometriesMeshCat supports several geometric primitives as well as meshes (represented by `.obj`, `.dae`, or `.stl` files). You can also specify a material to describe the object's color, reflectivity, or texture:
###Code
vis["sphere"].set_object(g.Sphere(0.1),
g.MeshLambertMaterial(
color=0xff22dd,
reflectivity=0.8))
vis["sphere"].delete()
###Output
_____no_output_____
###Markdown
MeshCat can load `.obj`, `.dae`, and `.stl` meshes via the `ObjMeshGeometry`, `DaeMeshGeometry`, and `StlMeshGeometry` types respectively:
###Code
vis["robots/valkyrie/head"].set_object(
g.ObjMeshGeometry.from_file(
os.path.join(meshcat.viewer_assets_path(), "data/head_multisense.obj")),
g.MeshLambertMaterial(
map=g.ImageTexture(
image=g.PngImage.from_file(
os.path.join(meshcat.viewer_assets_path(), "data/HeadTextureMultisense.png"))
)
)
)
###Output
_____no_output_____
###Markdown
The `PointCloud()` function is a helper to create a `Points` object with a `PointsGeometry` and `PointsMaterial`:
###Code
verts = np.random.rand(3, 100000)
vis["perception/pointclouds/random"].set_object(
g.PointCloud(position=verts, color=verts))
vis["perception/pointclouds/random"].set_transform(
tf.translation_matrix([0, 1, 0]))
vis["robots"].delete()
vis["perception"].delete()
###Output
_____no_output_____
###Markdown
Cart-PoleHere's a simple example of visualizing a mechanism:
###Code
cart_pole = vis["cart_pole"]
cart_pole.delete()
cart = cart_pole["cart"]
pivot = cart["pivot"]
pole = pivot["pole"]
cart.set_object(g.Box([0.5, 0.3, 0.2]))
pole.set_object(g.Box([1, 0.05, 0.05]))
pole.set_transform(tf.translation_matrix([0.5, 0, 0]))
pivot.set_transform(tf.rotation_matrix(-np.pi/2, [0, 1, 0]))
for x in np.linspace(-np.pi, np.pi, 200):
cart.set_transform(tf.translation_matrix([np.sin(x), 0, 0]))
pivot.set_transform(tf.rotation_matrix(x / 4 - np.pi / 2, [0, 1, 0]))
time.sleep(0.01)
###Output
_____no_output_____
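###Markdown
If you would rather record this trajectory than animate it live with `time.sleep`, the `Animation` API imported earlier in this document can do it. The sketch below assumes `vis`, `np`, `tf`, and the `cart_pole` tree from the previous cell are still in scope; the 200-frame count is an arbitrary choice, not a value from the original notebook.
###Code
# Sketch: record the same cart-pole trajectory as a meshcat Animation instead of a sleep loop.
from meshcat.animation import Animation

anim = Animation()
for i, x in enumerate(np.linspace(-np.pi, np.pi, 200)):
    with anim.at_frame(vis, i) as frame:
        # Inside an animation frame we set transforms on `frame`, not on `vis`.
        frame["cart_pole/cart"].set_transform(tf.translation_matrix([np.sin(x), 0, 0]))
        frame["cart_pole/cart/pivot"].set_transform(tf.rotation_matrix(x / 4 - np.pi / 2, [0, 1, 0]))
vis.set_animation(anim)  # pass play=False if you don't want playback to start immediately
###Output
_____no_output_____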
###Markdown
WARNING: "fitting_parameters.h5" needs to be in the directory you are working in, or importing mr_forecast in the next cell will fail. If you don't want the file in this directory, change line 16 of mr_forecast.py from hyper_file = 'fitting_parameters.h5' to hyper_file = [directory of fitting parameter file] + 'fitting_parameters.h5'
###Code
import numpy as np
import mr_forecast as mr
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
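###Markdown
As a minimal sketch of the path change described in the warning above (not part of the original notebook): `param_dir` below is a hypothetical placeholder for wherever you keep the fitting-parameter file.
###Code
# Hypothetical sketch of the edit to mr_forecast.py line 16; param_dir is an assumed placeholder path.
import os

param_dir = '/path/to/your/parameter/files'  # assumption: wherever fitting_parameters.h5 lives
hyper_file = os.path.join(param_dir, 'fitting_parameters.h5')
print(hyper_file)
###Output
_____no_output_____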
###Markdown
================================predict the mean and std of radius given those of the mass
###Code
Rmedian, Rplus, Rminus = mr.Mstat2R(mean=1.0, std=0.1, unit='Earth', sample_size=100, classify='Yes')
print('R = %.2f (+ %.2f - %.2f) REarth' % (Rmedian, Rplus, Rminus))
###Output
R = 1.00 (+ 0.12 - 0.10) REarth
###Markdown
================================predict a vector of radius given a vector of mass
###Code
M1 = np.loadtxt('demo_mass.dat')
R1 = mr.Mpost2R(M1, unit='Earth', classify='Yes')
plt.plot(np.log10(M1), np.log10(R1), 'bx')
plt.xlabel(r'$log_{10}\ M/M_{\oplus}$')
plt.ylabel(r'$log_{10}\ R/R_{\oplus}$')
plt.show()
###Output
_____no_output_____
###Markdown
================================predict the mean and std of mass given those of the radius
###Code
Mmedian, Mplus, Mminus = mr.Rstat2M(mean=0.1, std=0.01, unit='Jupiter', sample_size=100, grid_size=1e3, classify='Yes')
print('M = %.3f (+ %.3f - %.3f) MEarth' % (Mmedian, Mplus, Mminus))
###Output
M = 0.005 (+ 0.004 - 0.002) MEarth
###Markdown
================================predict a vector of mass given a vector of radius
###Code
R2 = np.loadtxt('demo_radius.dat')
M2 = mr.Rpost2M(R2, unit='Earth', grid_size=1e3, classify='Yes')
plt.hist(np.log10(M2))
plt.xlabel(r'$log_{10}\ M/M_{\odot}$')
plt.show()
###Output
_____no_output_____
###Markdown
You'll need to download the pretrained model from [Google Drive](https://drive.google.com/open?id=1cQ27LIn-Rig4-Uayzy_gH5-cW-NRGVzY) 1. model converted from chainer
###Code
# on this machine cupy isn't installed correctly,
# so it's a little slow
#trainer.load('/home/mahaviratcingularity/chainer_best_model_converted_to_pytorch_0.7053.pth')
trainer.load('/home/mahaviratcingularity/simple-faster-rcnn-pytorch-lablp/models/fasterrcnn_withlp.pth')
opt.caffe_pretrain=False # this model was trained from caffe-pretrained model
X = numpy.array(_labels)
#print (X[0,1])
i=0
for i in range (X.shape[1]):
#print (X[0,i])
print (LABEL_NAMES[X[0,i]])
_bboxes, _labels, _scores = trainer.faster_rcnn.predict(img,visualize=True)
vis_bbox(at.tonumpy(img[0]),
at.tonumpy(_bboxes[0]),
at.tonumpy(_labels[0]).reshape(-1),
at.tonumpy(_scores[0]).reshape(-1))
print ( _labels)
print (_scores)
print (_bboxes)
# it failed to find the dog, but if you set threshold from 0.7 to 0.6, you'll find it
###Output
[array([ 0, 10, 10, 12, 26, 29, 29, 30, 31, 33, 35, 36], dtype=int32)]
[array([0.95023704, 0.9499536 , 0.92274314, 0.9227969 , 0.96232533,
0.9769573 , 0.38931707, 0.94966257, 0.9637182 , 0.9325139 ,
0.36539766, 0.9433534 ], dtype=float32)]
[array([[ 874.90045, 566.86725, 1099.3694 , 706.8316 ],
[ 842.2266 , 484.73816, 1053.898 , 605.766 ],
[1081.174 , 1219.2161 , 1296.1874 , 1360.9817 ],
[1029.2302 , 1061.2607 , 1270.2358 , 1224.0825 ],
[ 948.4397 , 814.7312 , 1142.9932 , 938.14 ],
[1199.3596 , 1620.3827 , 1429.0635 , 1760.4459 ],
[1198.3215 , 1703.0204 , 1441.9355 , 1817.0106 ],
[1008.1824 , 940.32764, 1189.1702 , 1049.8595 ],
[1229.773 , 1882.8909 , 1464.6309 , 2041.2297 ],
[1236.569 , 1735.7858 , 1471.7858 , 1865.8185 ],
[ 959.94354, 510.98773, 1449.2808 , 2140.598 ],
[1146.8861 , 1494.2152 , 1379.2623 , 1653.2157 ]], dtype=float32)]
###Markdown
2. model trained with torchvision pretrained model
###Code
trainer.load('/home/cy/fasterrcnn_12211511_0.701052458187_torchvision_pretrain.pth')
opt.caffe_pretrain=False # this model was trained from torchvision-pretrained model
_bboxes, _labels, _scores = trainer.faster_rcnn.predict(img,visualize=True)
vis_bbox(at.tonumpy(img[0]),
at.tonumpy(_bboxes[0]),
at.tonumpy(_labels[0]).reshape(-1),
at.tonumpy(_scores[0]).reshape(-1))
# it failed to find the dog, but if you set threshold from 0.7 to 0.6, you'll find it
###Output
_____no_output_____
###Markdown
3. model trained with caffe pretrained model
###Code
trainer.load('/home/cy/fasterrcnn_12222105_0.712649824453_caffe_pretrain.pth')
opt.caffe_pretrain=True # this model was trained from caffe-pretrained model
_bboxes, _labels, _scores = trainer.faster_rcnn.predict(img,visualize=True)
vis_bbox(at.tonumpy(img[0]),
at.tonumpy(_bboxes[0]),
at.tonumpy(_labels[0]).reshape(-1),
at.tonumpy(_scores[0]).reshape(-1))
###Output
_____no_output_____
###Markdown
Linear Regression
###Code
data = pd.read_csv('data/salary_data.csv')
X = data.iloc[:, :-1].values
y = data.iloc[:, -1].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
lr = LinearRegression()
lr.fit(X_train, y_train)
pred = lr.predict(X_test)
Metrics(pred, y_test, 'Linear Regression')
plt.figure(figsize=(8, 6))
plt.plot(y_test, 'r-', label='Ground Truth')
plt.plot(pred, 'b-', label='Prediction')
plt.legend()
plt.show()
###Output
_____no_output_____
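###Markdown
As a quick sanity check on the from-scratch model above, the sketch below compares it against scikit-learn's own implementation; it assumes scikit-learn is installed and that the `X_train`/`X_test`/`y_train`/`y_test` split from the previous cell is still in scope. The import is aliased so it doesn't shadow the from-scratch `LinearRegression` class.
###Code
# Sketch: cross-check the from-scratch fit with scikit-learn's LinearRegression.
from sklearn.linear_model import LinearRegression as SkLinearRegression
from sklearn.metrics import r2_score

sk_lr = SkLinearRegression()
sk_lr.fit(X_train, y_train)
sk_pred = sk_lr.predict(X_test)
print('sklearn R^2:', r2_score(y_test, sk_pred))
print('slope:', sk_lr.coef_, 'intercept:', sk_lr.intercept_)
###Output
_____no_output_____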
###Markdown
PCA
###Code
data = pd.read_csv('data/diabetes.csv')
data.head()
X = data.iloc[:, :-1].values
y = data.iloc[:, -1].values
pca = PCA(n_components=2)
scale = StandardScaler()
X_scale = scale.fit_transform(X)
pca.fit(X_scale)
X_transform_scratch = pca.transform(X_scale)
plt.figure(figsize=(6, 6))
plt.scatter(X_transform_scratch[:,0], X_transform_scratch[:, 1], c ='red')
plt.plot()
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
scale = StandardScaler()
X_scale = scale.fit_transform(X)
pca.fit(X_scale)
X_transform = pca.transform(X_scale)
plt.figure(figsize=(6, 6))
plt.scatter(X_transform[:,0], X_transform[:, 1])
plt.plot()
plt.figure(figsize=(8, 8))
plt.scatter(X_transform[:, 0], X_transform[:, 1], c='g', marker='^')
plt.scatter(X_transform_scratch[:, 0], X_transform_scratch[:, 1], c='r', marker='o')
plt.show()
plt.figure(figsize=(8, 8))
plt.scatter(X_transform[:, 0], X_transform[:, 1], c='g', marker='^')
plt.scatter(X_transform_scratch[:, 0], -X_transform_scratch[:, 1], c='r', marker='o')
plt.show()
###Output
_____no_output_____
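###Markdown
To quantify how much of the scaled diabetes data those two components actually capture, a short sketch using the scikit-learn `pca` object fitted above:
###Code
# Sketch: variance captured by the two principal components of the fitted scikit-learn PCA.
print('explained variance ratio:', pca.explained_variance_ratio_)
print('total variance captured :', pca.explained_variance_ratio_.sum())
###Output
_____no_output_____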
###Markdown
KNN Regression
###Code
data = pd.read_csv('data/salary_data.csv')
X = data.iloc[:, :-1].values
y = data.iloc[:, -1].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
knn_regressor = KNeighborsRegression(n_neighbors=4, p=2)
knn_regressor.fit(X_train, y_train)
knn_predict = knn_regressor.predict(X_test)
Metrics(knn_predict, y_test, 'KNN Regression')
plt.figure(figsize=(8, 6))
plt.plot(y_test, 'r-', label='Ground Truth')
plt.plot(knn_predict, 'b-', label='Prediction')
plt.legend()
plt.show()
###Output
_____no_output_____
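###Markdown
Similarly, a quick cross-check of the from-scratch KNN regressor against scikit-learn, as a sketch that assumes the same train/test split from the cell above is still in scope and uses the same settings (4 neighbours, Euclidean distance).
###Code
# Sketch: compare with scikit-learn's KNN regressor using the same hyperparameters.
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

sk_knn = KNeighborsRegressor(n_neighbors=4, p=2)
sk_knn.fit(X_train, y_train)
sk_knn_pred = sk_knn.predict(X_test)
print('sklearn KNN MSE:', mean_squared_error(y_test, sk_knn_pred))
###Output
_____no_output_____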
###Markdown
Import libraries
###Code
import pandas as pd
import numpy as np
import datetime as dt
# Demean and rescale numerical columns
from sklearn.preprocessing import StandardScaler
# Package which performs all required encoding of dataset variables for data science projects
# - Fills in missing values
# - Encodes numeric variables (demeans and scales to unit variance, unless specified)
# - Encodes nominal variables (one-hot encodes)
# - Encodes timestamp variables (generates a set of cyclical features)
# - Is robust to intended boolean features being read in as strings or ints
# - Stores important details of train set encodings (means, variances, categories) for use in transforming
# test set
from hermes_ml.dataset_conditioner import FullEncoder
###Output
_____no_output_____
###Markdown
Load in example dataset
###Code
# Load in example train set dataframe
df = pd.read_csv(filepath_or_buffer='demo-dataset/dataset.csv', index_col=0, parse_dates=True)
# Load in example test set dataframe
df_test = pd.read_csv(filepath_or_buffer='demo-dataset/dataset_test.csv', index_col=0, parse_dates=True)
###Output
_____no_output_____
###Markdown
Temporary - convert intended datetime columns (currently strings) to datetime. `pandas.read_csv` is reading timestamp features in as strings (this doesn't seem to be a problem with SQLAlchemy/Redshift). In the future, this should be rolled into the `timestamp` encoder to make it more robust.
###Code
datetime_cols = ['datetimes_1', 'datetimes_2']
for datetime_col in datetime_cols:
df[datetime_col] = pd.to_datetime(df[datetime_col])
df_test[datetime_col] = pd.to_datetime(df_test[datetime_col])
###Output
_____no_output_____
###Markdown
Inspect dataset
###Code
df.head(5)
df_test.head(5)
###Output
_____no_output_____
###Markdown
Specify input lookup tableThe hermes-ml `FullEncoder` takes a lookup table specifying {`feature`, `dtype`, `missing value fill method`} for each feature
###Code
useful_cols = pd.DataFrame(
data=[
['datetimes_1', 'timestamp', 'skip'],
['datetimes_2', 'timestamp', 'skip'],
['numeric_1', 'numeric', 'mean'],
['numeric_2', 'numeric', 'mean'],
['numeric_3', 'numeric', 'zeros'],
['boolean_like_1', 'bool', 'skip'],
['boolean_like_2', 'bool', 'skip'],
['boolean_like_3', 'bool', 'skip'],
['boolean_like_4', 'bool', 'skip'],
['boolean', 'bool', 'skip'],
['nominal', 'nominal', 'skip'],
['ordinal_1', 'ordinal', 'skip'],
],
columns=[
'feature',
'dtype',
'fillna',
]
)
###Output
_____no_output_____
###Markdown
Visualise the resulting lookup table
###Code
useful_cols
###Output
_____no_output_____
###Markdown
Encoder - train set Run the `FullEncoder.fit_transform` method on the train set `df` to encode features and store means, variances, categorical columns etc. for future use on the test set
###Code
# Instantiate the encoder object
enc = FullEncoder()
# Fit encoder on training set and transform it
features_encoded = enc.fit_transform(df, useful_cols)
###Output
Filling in missing values...
Missing values filled
Encoding numeric features...
Numeric features encoded
Encoding nominal features...
Nominal features encoded
Encoding timestamp features...
###Markdown
Have a look at the resulting encoded dataframe
###Code
enc.means_
features_encoded.head(5)
###Output
_____no_output_____
###Markdown
Encoder - test set Run the `FullEncoder.transform` method on the test set `df_test` to encode features using the means, variances, categorical columns etc. generated on the train set
###Code
# Transform test set using encoding attributes learnt on the train set (means, variances, categories)
features_encoded_test = enc.transform(df_test, useful_cols)
features_encoded_test.head(5)
###Output
_____no_output_____
###Markdown
Save/load encoder to file
###Code
enc.save_encoder('demo_encoding')
###Output
_____no_output_____
###Markdown
Previous dataset encodings can be loaded from file
###Code
from hermes_ml.dataset_conditioner import load_encoder
enc_copy = load_encoder('demo_encoding')
features_encoded_test_after_reload = enc_copy.transform(df_test, useful_cols)
features_encoded_test.head(3)
features_encoded_test_after_reload.head(3)
###Output
_____no_output_____
###Markdown
IntroductionThis notebook shows how to use variance constrained semi grand canonical (VC-SGC) Molecular Dynamics/Monte Carlo (MD/MC) calculations [1]. This approach has been implemented in Lammps [2] and we have made bindings to it inside pyiron for easy use. Here, we show a simple example similar to that used in one of our publications [3], which investigates segregation of Mg to a $\Sigma 5$ tilt grain boundary in Al.[1] B. Sadigh, P. Erhart, A. Stukowski, A. Caro, E. Martinez, and L. Zepeda-Ruiz, Phys. Rev. B 85, 184203 (2012).[2] https://vcsgc-lammps.materialsmodeling.org.[3] Huan Zhao, Liam Huber, et al., Phys. Rev. Lett. 124, 106102 (2020). SetupImports and so forth.
###Code
from pyiron_atomistics import Project
from pyiron_atomistics.vasp.structure import read_atoms
from os.path import join as pjoin
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from sklearn.cluster import DBSCAN
pr = Project('mdmc_demo')
pr.remove_jobs_silently(recursive=True)
###Output
_____no_output_____
###Markdown
Run simulationsHere we actually run the calculations. First, by relaxing the relevant GB at 0K, then by running VC-SGC MD/MC calculations at high and low temperatures.The parallelization scheme for VC-SGC means there are lower limits on the structure size we can calculate for. Thus, even using a relatively short run of 20 ps of MD with 500 Monte Carlo phases this calculation takes about ten minutes. Once it's executing, go grab a coffee.
###Code
host = 'Al'
solute = 'Mg'
potential = '2009--Mendelev-M-I--Al-Mg--LAMMPS--ipr1'
lattice_guess = pr.create_ase_bulk(host, cubic=True).cell[0, 0]
ideal_gb_struct = read_atoms('S5_s001_n210_r1011', species_list=[host, host])
ideal_gb_struct.cell *= lattice_guess
ideal_gb_struct.positions *= lattice_guess
relax_gb_job = pr.create_job(pr.job_type.Lammps, 'relax_gb')
relax_gb_job.potential = potential
relax_gb_job.structure = ideal_gb_struct
relax_gb_job.calc_minimize(f_tol=0.001, pressure=0)
relax_gb_job.run()
gb_struct = relax_gb_job.get_structure().copy()
gb_struct.plot3d();
mdmc_job = pr.create_job(pr.job_type.Lammps, 'mdmc')
mdmc_job.potential = potential
mdmc_job.structure = gb_struct.repeat([1, 4, 8])
mdmc_job.calc_vcsgc(
mu={'Al':0, 'Mg':-2},
target_concentration={'Al':0.9, 'Mg':0.1},
temperature=300,
pressure=0.0,
n_ionic_steps=10000,
mc_step_interval=20,
time_step=2.0,
langevin=True
)
mdmc_job.run()
###Output
The job mdmc was saved and received the ID: 2
###Markdown
Plotting functionsJust leave this collapsed unless you're really keen.
###Code
def plot_average_occupation(struct, eps=0.5, min_samples=5,
columnar_axis=2, max_in_col=9,
size=200, figsize=(30, 10), fontsize=35,
save_name=None, fmt='eps', index1_name='Mg', show_orphans=False):
"""
For a system with a nice columnar projection, given a VC-SGC job and its minimized final structure,
plots the mean occupation of each column (indicated by colour). Atoms which could not be grouped are
plotted as black.
`eps` should be tuned to have the minimum number of uncatagorized atoms without assigning more than
the expected number of atoms to a given column.
"""
from matplotlib.colors import ListedColormap
# Project onto the plane
axes = [0, 1, 2]
axes.pop(columnar_axis)
projected_pos = struct.positions[:, axes]
# Cluster by column
cluster_algo = DBSCAN(eps=eps, min_samples=min_samples)
cluster_algo.fit(projected_pos)
column_groups = cluster_algo.labels_
unique_groups = np.unique(column_groups)
unique_groups = unique_groups[unique_groups != -1] # Ignore the 'noisy' group
# Double check that two columns aren't getting lumped together
group_sizes = [len(column_groups[np.argwhere(column_groups == g).reshape(-1)]) for g in unique_groups]
for group_size in group_sizes:
if group_size > max_in_col:
print("WARNING: Group found with {} members.".format(group_size))
# Get the average positions
ungrouped_pos = projected_pos[np.argwhere(column_groups == -1).reshape(-1)]
column_pos = np.array([np.mean(projected_pos[np.argwhere(column_groups == group).reshape(-1)], axis=0)
for group in unique_groups])
# Get the average occupation
indices = struct.indices
column_concentration = np.array([np.mean(indices[np.argwhere(column_groups == group).reshape(-1)])
for group in unique_groups])
# Plot
units = 255
Al_color = np.array([153/units, 153/units, 153/units])
Mg_color = np.array([(0/units, 128/units, 255/units)])
mix_frac = np.linspace(0, 1, 1000)
cmap = ListedColormap([tuple(((1 - x) * Al_color + x * Mg_color)[0]) for x in mix_frac])
fig, ax = plt.subplots(figsize=figsize)
if show_orphans:
ax.scatter(ungrouped_pos[:, 0], ungrouped_pos[:, 1], s=size, color='r', marker='s', alpha=0.1)
cols = ax.scatter(column_pos[:, 0], column_pos[:, 1], c=column_concentration, s=size, cmap=cmap)
cbar = fig.colorbar(cols, orientation='horizontal')
ax.set_aspect('equal')
ax.tick_params(axis='both', which='major', labelsize=fontsize)
ax.set_xlabel('Distance $[\mathrm{\AA}]$', size=fontsize)
ax.set_ylabel('Distance $[\mathrm{\AA}]$', size=fontsize)
cbar.ax.set_xlabel('Columnar {} concentration'.format(index1_name), size=fontsize)
cbar.ax.tick_params(axis='both', which='major', labelsize=fontsize)
fig.tight_layout()
if save_name is not None:
plt.savefig(save_name + '.' + fmt, format=fmt)
###Output
_____no_output_____
###Markdown
VisualizationFinally, let's take a peek at the results.Early on, we see that although the Mg atoms are more prevalent near the boundary, they are still spread out somewhat uniformly through the system.By the end of the simulation, even with this truncated simulation time to account for the fact this is only a demo, the Mg atoms nearly perfectly occupy the planar GB sites, and you can begin to see the columnar checkerboard occupation appearing. Depending on your random seed, you may also see some structural changes at the GB.
###Code
plot_average_occupation(mdmc_job.get_structure(10))
plot_average_occupation(mdmc_job.get_structure(-1))
###Output
_____no_output_____
###Markdown
CleanupThis will be commented out to begin with, in case you want to probe the output a little more deeply. But feel free to uncomment and execute whenever you're done with the demo.
###Code
# pr.remove_jobs_silently(recursive=True)
# pr.remove(enable=True)
###Output
_____no_output_____ |
C to F Converter.ipynb | ###Markdown
PROBLEM STATEMENT - In this project, we will build a simple machine learning model to convert from celsius to fahrenheit. - The equation is as follows: **T(°F) = T(°C) × 9/5 + 32**- For Example, let's convert 0°C celsius temperature to Fahrenheit: **(0°C × 9/5) + 32 = 32°F** <img src="https://upload.wikimedia.org/wikipedia/commons/7/70/Thermometer_CF.svg" alt="Fashion MNIST sprite" width="600"> Figure 1. Convert Celsius to Fahrenheit [Image Source: https://commons.wikimedia.org/wiki/File:Thermometer_CF.svg] IMPORT LIBRARIES
###Code
import tensorflow as tf
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
###Output
_____no_output_____
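###Markdown
Before training anything, note that the conversion itself is just the one-line formula above; the sketch below is a plain-Python reference (independent of the dataset) that the trained network can later be compared against.
###Code
# Plain-Python reference for T(°F) = T(°C) * 9/5 + 32 (no learning involved).
def celsius_to_fahrenheit(temp_c):
    return temp_c * 9 / 5 + 32

print(celsius_to_fahrenheit(0))    # expected 32.0
print(celsius_to_fahrenheit(100))  # expected 212.0
###Output
_____no_output_____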
###Markdown
IMPORT DATASETS
###Code
Temperature_df = pd.read_csv('Celsius to Fahrenheit.csv')
Temperature_df.reset_index(drop=True, inplace=True)
Temperature_df
Temperature_df.head(5)
Temperature_df.tail(10)
Temperature_df.info()
Temperature_df.describe()
###Output
_____no_output_____
###Markdown
VISUALIZE DATASET
###Code
sns.scatterplot(Temperature_df['Celsius'], Temperature_df['Fahrenheit'])
###Output
_____no_output_____
###Markdown
CREATE TESTING AND TRAINING DATASET
###Code
X_train = Temperature_df['Celsius']
y_train = Temperature_df['Fahrenheit']
###Output
_____no_output_____
###Markdown
BUILD AND TRAIN THE MODEL
###Code
X_train.shape
y_train.shape
###Output
_____no_output_____
###Markdown
This will model a simple linear equation.
###Code
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(units=1, input_shape=[1]))
model.summary()
model.compile(optimizer=tf.keras.optimizers.Adam(0.5), loss='mean_squared_error')
epochs_hist = model.fit(X_train, y_train, epochs = 100)
###Output
Epoch 1/100
1/1 [==============================] - 0s 971us/step - loss: 1123.2013
Epoch 2/100
1/1 [==============================] - 0s 2ms/step - loss: 985.2646
Epoch 3/100
1/1 [==============================] - 0s 998us/step - loss: 1013.1875
Epoch 4/100
1/1 [==============================] - 0s 996us/step - loss: 981.9695
Epoch 5/100
1/1 [==============================] - 0s 3ms/step - loss: 911.8337
Epoch 6/100
1/1 [==============================] - 0s 995us/step - loss: 863.8497
Epoch 7/100
1/1 [==============================] - 0s 2ms/step - loss: 849.8747
Epoch 8/100
1/1 [==============================] - 0s 1ms/step - loss: 838.9042
Epoch 9/100
1/1 [==============================] - 0s 998us/step - loss: 807.2853
Epoch 10/100
1/1 [==============================] - 0s 997us/step - loss: 763.1291
Epoch 11/100
1/1 [==============================] - 0s 2ms/step - loss: 725.4022
Epoch 12/100
1/1 [==============================] - 0s 2ms/step - loss: 702.7316
Epoch 13/100
1/1 [==============================] - 0s 998us/step - loss: 687.3940
Epoch 14/100
1/1 [==============================] - 0s 2ms/step - loss: 666.3483
Epoch 15/100
1/1 [==============================] - 0s 3ms/step - loss: 636.0659
Epoch 16/100
1/1 [==============================] - 0s 2ms/step - loss: 603.5819
Epoch 17/100
1/1 [==============================] - 0s 998us/step - loss: 577.2011
Epoch 18/100
1/1 [==============================] - 0s 2ms/step - loss: 558.3974
Epoch 19/100
1/1 [==============================] - 0s 998us/step - loss: 541.4453
Epoch 20/100
1/1 [==============================] - 0s 997us/step - loss: 520.4036
Epoch 21/100
1/1 [==============================] - 0s 2ms/step - loss: 495.2239
Epoch 22/100
1/1 [==============================] - 0s 3ms/step - loss: 470.6245
Epoch 23/100
1/1 [==============================] - 0s 2ms/step - loss: 450.5358
Epoch 24/100
1/1 [==============================] - 0s 999us/step - loss: 434.2827
Epoch 25/100
1/1 [==============================] - 0s 997us/step - loss: 417.9491
Epoch 26/100
1/1 [==============================] - 0s 997us/step - loss: 398.9818
Epoch 27/100
1/1 [==============================] - 0s 994us/step - loss: 378.6342
Epoch 28/100
1/1 [==============================] - 0s 3ms/step - loss: 359.9960
Epoch 29/100
1/1 [==============================] - 0s 1ms/step - loss: 344.4514
Epoch 30/100
1/1 [==============================] - 0s 996us/step - loss: 330.4468
Epoch 31/100
1/1 [==============================] - 0s 4ms/step - loss: 315.6053
Epoch 32/100
1/1 [==============================] - 0s 996us/step - loss: 299.4599
Epoch 33/100
1/1 [==============================] - 0s 2ms/step - loss: 283.6521
Epoch 34/100
1/1 [==============================] - 0s 1ms/step - loss: 269.7682
Epoch 35/100
1/1 [==============================] - 0s 998us/step - loss: 257.5607
Epoch 36/100
1/1 [==============================] - 0s 2ms/step - loss: 245.4903
Epoch 37/100
1/1 [==============================] - 0s 2ms/step - loss: 232.6832
Epoch 38/100
1/1 [==============================] - 0s 997us/step - loss: 219.8278
Epoch 39/100
1/1 [==============================] - 0s 3ms/step - loss: 208.1144
Epoch 40/100
1/1 [==============================] - 0s 2ms/step - loss: 197.7449
Epoch 41/100
1/1 [==============================] - 0s 1ms/step - loss: 187.8296
Epoch 42/100
1/1 [==============================] - 0s 3ms/step - loss: 177.6133
Epoch 43/100
1/1 [==============================] - 0s 3ms/step - loss: 167.3596
Epoch 44/100
1/1 [==============================] - 0s 996us/step - loss: 157.8542
Epoch 45/100
1/1 [==============================] - 0s 996us/step - loss: 149.3400
Epoch 46/100
1/1 [==============================] - 0s 2ms/step - loss: 141.2830
Epoch 47/100
1/1 [==============================] - 0s 2ms/step - loss: 133.1528
Epoch 48/100
1/1 [==============================] - 0s 2ms/step - loss: 125.0811
Epoch 49/100
1/1 [==============================] - 0s 997us/step - loss: 117.5744
Epoch 50/100
1/1 [==============================] - 0s 998us/step - loss: 110.7860
Epoch 51/100
1/1 [==============================] - 0s 3ms/step - loss: 104.3578
Epoch 52/100
1/1 [==============================] - 0s 997us/step - loss: 97.9568
Epoch 53/100
1/1 [==============================] - 0s 3ms/step - loss: 91.6984
Epoch 54/100
1/1 [==============================] - 0s 2ms/step - loss: 85.9058
Epoch 55/100
1/1 [==============================] - 0s 1ms/step - loss: 80.6268
Epoch 56/100
1/1 [==============================] - 0s 997us/step - loss: 75.5986
Epoch 57/100
1/1 [==============================] - 0s 3ms/step - loss: 70.6426
Epoch 58/100
1/1 [==============================] - 0s 996us/step - loss: 65.8824
Epoch 59/100
1/1 [==============================] - 0s 997us/step - loss: 61.5093
Epoch 60/100
1/1 [==============================] - 0s 3ms/step - loss: 57.4891
Epoch 61/100
1/1 [==============================] - 0s 2ms/step - loss: 53.6331
Epoch 62/100
1/1 [==============================] - 0s 992us/step - loss: 49.8772
Epoch 63/100
1/1 [==============================] - 0s 997us/step - loss: 46.3395
Epoch 64/100
1/1 [==============================] - 0s 998us/step - loss: 43.1070
Epoch 65/100
1/1 [==============================] - 0s 998us/step - loss: 40.1010
Epoch 66/100
1/1 [==============================] - 0s 2ms/step - loss: 37.2062
Epoch 67/100
1/1 [==============================] - 0s 968us/step - loss: 34.4335
Epoch 68/100
1/1 [==============================] - 0s 997us/step - loss: 31.8701
Epoch 69/100
1/1 [==============================] - 0s 2ms/step - loss: 29.5250
Epoch 70/100
1/1 [==============================] - 0s 999us/step - loss: 27.3161
Epoch 71/100
1/1 [==============================] - 0s 996us/step - loss: 25.1977
Epoch 72/100
1/1 [==============================] - 0s 998us/step - loss: 23.2121
Epoch 73/100
1/1 [==============================] - 0s 995us/step - loss: 21.3992
Epoch 74/100
1/1 [==============================] - 0s 2ms/step - loss: 19.7247
Epoch 75/100
1/1 [==============================] - 0s 2ms/step - loss: 18.1349
Epoch 76/100
1/1 [==============================] - 0s 2ms/step - loss: 16.6333
Epoch 77/100
1/1 [==============================] - 0s 1ms/step - loss: 15.2555
Epoch 78/100
1/1 [==============================] - 0s 1000us/step - loss: 13.9983
Epoch 79/100
1/1 [==============================] - 0s 2ms/step - loss: 12.8217
Epoch 80/100
1/1 [==============================] - 0s 1ms/step - loss: 11.7102
Epoch 81/100
1/1 [==============================] - 0s 3ms/step - loss: 10.6844
Epoch 82/100
1/1 [==============================] - 0s 998us/step - loss: 9.7537
Epoch 83/100
1/1 [==============================] - 0s 997us/step - loss: 8.8949
Epoch 84/100
1/1 [==============================] - 0s 3ms/step - loss: 8.0877
Epoch 85/100
1/1 [==============================] - 0s 997us/step - loss: 7.3402
Epoch 86/100
1/1 [==============================] - 0s 1ms/step - loss: 6.6636
Epoch 87/100
1/1 [==============================] - 0s 997us/step - loss: 6.0463
Epoch 88/100
1/1 [==============================] - 0s 997us/step - loss: 5.4709
Epoch 89/100
1/1 [==============================] - 0s 996us/step - loss: 4.9378
Epoch 90/100
1/1 [==============================] - 0s 997us/step - loss: 4.4558
Epoch 91/100
1/1 [==============================] - 0s 998us/step - loss: 4.0202
Epoch 92/100
1/1 [==============================] - 0s 0s/step - loss: 3.6177
Epoch 93/100
1/1 [==============================] - 0s 996us/step - loss: 3.2457
Epoch 94/100
1/1 [==============================] - 0s 998us/step - loss: 2.9099
Epoch 95/100
1/1 [==============================] - 0s 993us/step - loss: 2.6088
Epoch 96/100
1/1 [==============================] - 0s 996us/step - loss: 2.3331
Epoch 97/100
1/1 [==============================] - 0s 1ms/step - loss: 2.0792
Epoch 98/100
1/1 [==============================] - 0s 999us/step - loss: 1.8508
Epoch 99/100
1/1 [==============================] - 0s 998us/step - loss: 1.6475
Epoch 100/100
1/1 [==============================] - 0s 1ms/step - loss: 1.4630
###Markdown
EVALUATING THE MODEL
###Code
epochs_hist.history.keys()
plt.plot(epochs_hist.history['loss'])
plt.title('Model Loss Progress During Training')
plt.xlabel('Epoch')
plt.ylabel('Training Loss')
plt.legend(['Training Loss'])
model.get_weights()
# Use the trained model to perform predictions
Temp_C = 0
Temp_F = model.predict([Temp_C])
print('Temperature in degF Using Trained ANN =', Temp_F)
# Let's confirm this Using the equation:
Temp_F = 9/5 * Temp_C + 32
print('Temperature in degF Using Equation =', Temp_F)
###Output
Temperature in degF Using Equation = 32.0
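###Markdown
Since the network is a single Dense unit, its learned weight and bias should approach the true slope 9/5 and intercept 32; the sketch below assumes `model` from the cells above is still in scope.
###Code
# Sketch: the single Dense unit should learn weight ~= 9/5 and bias ~= 32.
(weight,), (bias,) = [w.flatten() for w in model.get_weights()]
print('learned weight:', weight, '| true slope    :', 9 / 5)
print('learned bias  :', bias, '| true intercept:', 32)
###Output
_____no_output_____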
###Markdown
Result After Tuning Hyperparameters:
###Code
model.compile(optimizer=tf.keras.optimizers.Adam(0.3), loss='mean_squared_error')
epochs_hist = model.fit(X_train, y_train, epochs = 100)
plt.plot(epochs_hist.history['loss'])
plt.title('Model Loss Progress During Training')
plt.xlabel('Epoch')
plt.ylabel('Training Loss')
plt.legend(['Training Loss'])
###Output
_____no_output_____ |
solutions/ch_11/exercise_2.ipynb | ###Markdown
Finding Outliers with k-means Setup
###Code
import numpy as np
import pandas as pd
import sqlite3
with sqlite3.connect('../../ch_11/logs/logs.db') as conn:
logs_2018 = pd.read_sql(
"""
SELECT *
FROM logs
WHERE datetime BETWEEN "2018-01-01" AND "2019-01-01";
""",
conn, parse_dates=['datetime'], index_col='datetime'
)
logs_2018.head()
###Output
_____no_output_____
###Markdown
The `get_X()` function from the chapter:
###Code
def get_X(log, day):
"""
Get data we can use for the X
Parameters:
- log: The logs dataframe
- day: A day or single value we can use as a datetime index slice
Returns:
A `pandas.DataFrame` object
"""
return pd.get_dummies(log.loc[day].assign(
failures=lambda x: 1 - x.success
).query('failures > 0').resample('1min').agg(
{'username': 'nunique', 'failures': 'sum'}
).dropna().rename(
columns={'username': 'usernames_with_failures'}
).assign(
day_of_week=lambda x: x.index.dayofweek,
hour=lambda x: x.index.hour
).drop(columns=['failures']), columns=['day_of_week', 'hour'])
###Output
_____no_output_____
###Markdown
Get January 2018 data:
###Code
X = get_X(logs_2018, '2018')
X.columns
###Output
_____no_output_____
###Markdown
k-meansSince we want a "normal" activity cluster and an "anomaly" cluster, we need to make 2 clusters.
###Code
from sklearn.cluster import KMeans
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
kmeans_pipeline = Pipeline([
('scale', StandardScaler()),
('kmeans', KMeans(random_state=0, n_clusters=2))
]).fit(X)
###Output
_____no_output_____
###Markdown
The cluster label doesn't mean anything to us, but we can examine the size of each cluster. We don't expect the clusters to be of equal size because anomalous activity doesn't happen as often as normal activity (we presume).
###Code
preds = kmeans_pipeline.predict(X)
pd.Series(preds).value_counts()
###Output
_____no_output_____
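###Markdown
To see what the two clusters look like in the original feature units, one option (a sketch, assuming the fitted pipeline and `X` from above) is to undo the scaling on the cluster centers:
###Code
# Sketch: cluster centers mapped back to the original (unscaled) feature units.
centers_scaled = kmeans_pipeline.named_steps['kmeans'].cluster_centers_
centers = kmeans_pipeline.named_steps['scale'].inverse_transform(centers_scaled)
pd.DataFrame(centers, columns=X.columns)
###Output
_____no_output_____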
###Markdown
Evaluating the clustering Step 1: Get the true labels
###Code
with sqlite3.connect('../../ch_11/logs/logs.db') as conn:
hackers_2018 = pd.read_sql(
'SELECT * FROM attacks WHERE start BETWEEN "2018-01-01" AND "2019-01-01";',
conn, parse_dates=['start', 'end']
).assign(
duration=lambda x: x.end - x.start,
start_floor=lambda x: x.start.dt.floor('min'),
end_ceil=lambda x: x.end.dt.ceil('min')
)
###Output
_____no_output_____
###Markdown
The `get_y()` function from the chapter:
###Code
def get_y(datetimes, hackers, resolution='1min'):
"""
Get data we can use for the y (whether or not a hacker attempted a log in during that time).
Parameters:
- datetimes: The datetimes to check for hackers
- hackers: The dataframe indicating when the attacks started and stopped
- resolution: The granularity of the datetime. Default is 1 minute.
Returns:
`pandas.Series` of Booleans.
"""
date_ranges = hackers.apply(
lambda x: pd.date_range(x.start_floor, x.end_ceil, freq=resolution),
axis=1
)
dates = pd.Series(dtype='object')
for date_range in date_ranges:
dates = pd.concat([dates, date_range.to_series()])
return datetimes.isin(dates)
###Output
_____no_output_____
###Markdown
Get the true labels:
###Code
is_hacker = get_y(X.reset_index().datetime, hackers_2018)
###Output
_____no_output_____
###Markdown
Step 2: Calculate Fowlkes-Mallows ScoreThis score is the geometric mean of the pairwise precision and recall; roughly, it measures how consistently pairs of observations that are grouped together under the true labels are also grouped together under the predicted labels (and vice versa).
###Code
from sklearn.metrics import fowlkes_mallows_score
fowlkes_mallows_score(is_hacker, preds)
###Output
_____no_output_____
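###Markdown
A complementary way to read this score (a sketch, assuming `is_hacker` and `preds` from above) is a simple contingency table showing how the predicted clusters line up with the true labels:
###Code
# Sketch: cross-tabulate predicted cluster vs. true hacker label.
pd.crosstab(is_hacker, pd.Series(preds, name='cluster'))
###Output
_____no_output_____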
###Markdown
Finding Outliers with k-Means Setup
###Code
import numpy as np
import pandas as pd
import sqlite3
with sqlite3.connect('../../ch_11/logs/logs.db') as conn:
logs_2018 = pd.read_sql(
"""
SELECT *
FROM logs
WHERE datetime BETWEEN "2018-01-01" AND "2019-01-01";
""",
conn, parse_dates=['datetime'], index_col='datetime'
)
logs_2018.head()
def get_X(log, day):
"""
Get data we can use for the X
Parameters:
- log: The logs dataframe
- day: A day or single value we can use as a datetime index slice
Returns:
A pandas DataFrame
"""
return pd.get_dummies(log[day].assign(
failures=lambda x: 1 - x.success
).query('failures > 0').resample('1min').agg(
{'username':'nunique', 'failures': 'sum'}
).dropna().rename(
columns={'username':'usernames_with_failures'}
).assign(
day_of_week=lambda x: x.index.dayofweek,
hour=lambda x: x.index.hour
).drop(columns=['failures']), columns=['day_of_week', 'hour'])
X = get_X(logs_2018, '2018')
X.columns
###Output
_____no_output_____
###Markdown
k-MeansSince we want a "normal" activity cluster and an "anomaly" cluster, we need to make 2 clusters.
###Code
from sklearn.cluster import KMeans
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
kmeans_pipeline = Pipeline([
('scale', StandardScaler()),
('kmeans', KMeans(random_state=0, n_clusters=2))
]).fit(X)
###Output
c:\users\molinstefanie\packt\venv\lib\site-packages\sklearn\preprocessing\data.py:645: DataConversionWarning: Data with input dtype uint8, int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
c:\users\molinstefanie\packt\venv\lib\site-packages\sklearn\base.py:464: DataConversionWarning: Data with input dtype uint8, int64 were all converted to float64 by StandardScaler.
return self.fit(X, **fit_params).transform(X)
###Markdown
The cluster label doesn't mean anything to us, but we can examine the size of each cluster. We don't expect the clusters to be of equal size because anomalous activity doesn't happen as often as normal activity (we presume).
###Code
preds = kmeans_pipeline.predict(X)
pd.Series(preds).value_counts()
###Output
c:\users\molinstefanie\packt\venv\lib\site-packages\sklearn\pipeline.py:331: DataConversionWarning: Data with input dtype uint8, int64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
###Markdown
Evaluating the clustering Step 1: Get the true labels
###Code
with sqlite3.connect('../../ch_11/logs/logs.db') as conn:
hackers_2018 = pd.read_sql(
'SELECT * FROM attacks WHERE start BETWEEN "2018-01-01" AND "2019-01-01";',
conn, parse_dates=['start', 'end']
).assign(
duration=lambda x: x.end - x.start,
start_floor=lambda x: x.start.dt.floor('min'),
end_ceil=lambda x: x.end.dt.ceil('min')
)
def get_y(datetimes, hackers, resolution='1min'):
"""
Get data we can use for the y (whether or not a hacker attempted a log in during that time).
Parameters:
- datetimes: The datetimes to check for hackers
- hackers: The dataframe indicating when the attacks started and stopped
- resolution: The granularity of the datetime. Default is 1 minute.
Returns:
A pandas Series of booleans.
"""
date_ranges = hackers.apply(
lambda x: pd.date_range(x.start_floor, x.end_ceil, freq=resolution),
axis=1
)
dates = pd.Series()
for date_range in date_ranges:
dates = pd.concat([dates, date_range.to_series()])
return datetimes.isin(dates)
is_hacker = get_y(X.reset_index().datetime, hackers_2018)
###Output
_____no_output_____
###Markdown
Step 2: Calculate Fowlkes-Mallows ScoreThis score is the geometric mean of the pairwise precision and recall; roughly, it measures how consistently pairs of observations that are grouped together under the true labels are also grouped together under the predicted labels (and vice versa).
###Code
from sklearn.metrics import fowlkes_mallows_score
fowlkes_mallows_score(is_hacker, preds)
###Output
_____no_output_____ |
Aula03/Exercicios.ipynb | ###Markdown
Practice exercises on: Pandas, Matplotlib and Seaborn. Prepared by <a href="https://www.linkedin.com/in/bruno-coelho-277519129/">Bruno Gomes Coelho</a> for the classes of the [DATA](https://github.com/icmc-data) group. Instructions: follow the notebook step by step, adding your code every time you see a ` your code here`. If the deadline is still open (until the next class), submit your answers [in this Google Form](https://goo.gl/forms/6pXx1AjD4rjES1Tp2) to find out how many you got right. Load useful libraries and set defaults
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set() # This line configures seaborn's default style; recommended
###Output
_____no_output_____
###Markdown
Load the dataset For this exercise we will use the [Boston Housing](https://archive.ics.uci.edu/ml/machine-learning-databases/housing/) dataset. It gives us socio-economic data about a region and the median house price of that region. To make our lives easier, we won't fetch the data from UCI but load it directly from the Scikit-Learn library, which we will use in more depth in the next classes. To install it, one of these options should work: - pip install scikit-learn - conda install scikit-learn - python3 -m pip install scikit-learn - sudo python3 -m pip install scikit-learn. If everything fails, have a look at their [installation guide](https://scikit-learn.org/stable/install.html) or talk to one of the organizers.
###Code
# Once installed, this line should run without errors:
from sklearn.datasets import load_boston
# Since the dataset comes split into features and target,
# let's join the two into a single DataFrame.
boston_data = load_boston()
# First the features
df = pd.DataFrame(boston_data.data, columns=boston_data.feature_names)
# Now our target (the value we want to predict)
df["target"] = boston_data.target
df.head()
###Output
_____no_output_____
###Markdown
Describe the data in our table. HINT: the function you want shows count, mean, std, (...) for each of our columns...
###Code
df.describe()
###Output
_____no_output_____
###Markdown
Based on the result of the function above, answer: Q1: Which feature has the largest mean? - CRIM - ZN - INDUS - CHAS - NOX - RM - AGE - DIS - RAD - TAX - PTRATIO - B - LSTAT - target. Q2: How many features have all of their values within [0, 1]? **NOTE**: the bracket notation means the endpoints are included. - 0 - 1 - 2 - 3 - 4. Going back to coding now, answer... Q3: Does this dataset contain nulls? - No - Yes. **NOTE**: if it does contain nulls, remove them from the dataset in the cell below so they don't influence the rest of the exercise.
###Code
# your code here
###Output
_____no_output_____
###Markdown
In our dataset each row represents a different region, with several pieces of information about that region in each column; our "target" column holds the median house price of that region. It is worth pointing out that the price is in thousands of dollars! Let's look at the houses that cost more/less than 20 thousand. Q4: Approximately what percentage of regions have a price greater than or equal to 20 thousand dollars? - 32 % - 47 % - 54 % - 63 %
###Code
df_m = df[df['target'] >= 20]
df_l = df[df['target'] < 20]
df_m.count()
#df_l.count()
###Output
_____no_output_____
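###Markdown
The counts above answer the question, but the fraction can also be read off directly; a one-line sketch using the same `df`:
###Code
# Fraction of regions whose median price is >= 20 (i.e., 20 thousand dollars).
(df['target'] >= 20).mean()
###Output
_____no_output_____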
###Markdown
In our dataset there is a feature called **CHAS**. It indicates whether or not the region is close to the [Charles River](https://en.wikipedia.org/wiki/Charles_River). Let's check that it only takes two values, 0 or 1. Here, 1 indicates that the region is close to the river.
###Code
# Indeed, only two values.
df["CHAS"].value_counts()
###Output
_____no_output_____
###Markdown
We want to know whether being closer to the river makes a difference in the house price; to do that, we can plot the price ("target") distribution both for regions close to the river and for those that are not, and see whether there is a difference. Q5: Considering houses near and far from the river, mark the correct statements: - Houses near the river seem to concentrate more around 20 thousand dollars than those far from the river. - The price distribution for houses near the river looks like a [bimodal distribution](https://en.wikipedia.org/wiki/Multimodal_distribution). - Houses far from the river seem to be spread over the expensive ranges (above 40 thousand) more than houses near the river. **NOTE**: we can do this in two lines! (but feel free to break it into more so the indexing is more readable). **NOTE 2**: remember a certain "label" parameter we used in some kind of plot...? **NOTE 3**: use `hist=False` to make it prettier ;)
###Code
proximo = df['CHAS'] == 1
distante = df['CHAS'] == 0
sns.distplot(df[proximo]['target'], hist = False, label = 'P. do rio')
sns.distplot(df[distante]['target'], hist = False, label = 'L. do rio')
###Output
_____no_output_____
###Markdown
Let's now consider a subset of features to analyse their distributions by quartiles.
###Code
subset = df.columns[[2, 4, 5, 6, 8, 10, 12]]
print(subset)
###Output
Index(['INDUS', 'NOX', 'RM', 'AGE', 'RAD', 'PTRATIO', 'LSTAT'], dtype='object')
###Markdown
Plot the quartile distribution for each of the columns defined above. Note that you can do this with a `for` loop as long as you remember to call `plt.show()` on each iteration. This way, len(subset) plots will be generated, one below the other.
###Code
for x in subset:
plt.show()
sns.boxplot(df[x])
###Output
_____no_output_____
###Markdown
Now answer: Q6: Which feature has the largest region between the median and the 3rd quartile? - INDUS - NOX - RM - AGE - RAD - PTRATIO - LSTAT. Q7: Which feature has outliers both to the left and to the right of the expected value range? - INDUS - NOX - RM - AGE - RAD - PTRATIO - LSTAT. **NOTE**: as the definition of [outliers](https://en.wikipedia.org/wiki/Outlier), use whatever falls outside the range (Q1−1.5⋅IQR, Q3+1.5⋅IQR). Let's now consider a subset of features to analyse their distributions and correlations.
###Code
subset = ['RM', 'AGE', 'DIS', 'B', 'LSTAT', 'target']
###Output
_____no_output_____
###Markdown
Plot the relationships between the features above and answer:
###Code
sns.pairplot(df[subset])
###Output
_____no_output_____
###Markdown
Q8: Which feature seems to have the vast majority of its values in a small range? - RM - AGE - DIS - B - LSTAT - target. Here the questions end :) ![](https://i.imgflip.com/s5spp.jpg) As a bonus, let's now illustrate the importance of understanding your data, and how a data scientist's work can be beneficial (or not) to society. Whenever we work with real data, building models that impact people, we have to be extremely careful. In this small problem, for example, the variable **B** is related to the number of people who self-declare as Afro-descendant in the city of the analysed region; notice that at no point in the analysis did we worry about how the data was acquired or whether it was anonymised; in this case the data is not that personal, since it is an average over a region, but it could be a dataset about individual consumers in which personal information is collected without the proper permissions. Let's take a closer look at the variable B. It is defined as B = 1000(Bk - 0.63)^2, where Bk is the proportion of people self-declared as Afro-descendant. Based on the image below (and since Bk cannot exceed 1), values above 150 for the variable B correspond to a low Bk: ![](https://storage.googleapis.com/kaggle-forum-message-attachments/inbox/372266/e30bf91054037667a9b69abb600ba97c/Screen%20Shot%202018-09-24%20at%204.47.17%20PM.png) Let's now analyse house prices in relation to B. We'll define a binary index, B > 150.
###Code
low_bk = df["B"] > 150
sns.distplot(df[low_bk]["target"], label="low bk", hist=False)
sns.distplot(df[~low_bk]["target"], label="high bk", hist=False)
###Output
_____no_output_____
###Markdown
Practice exercises on: Pandas, Matplotlib and Seaborn. Prepared by <a href="https://www.linkedin.com/in/bruno-coelho-277519129/">Bruno Gomes Coelho</a> for the classes of the [DATA](https://github.com/icmc-data) group. Instructions: follow the notebook step by step, adding your code every time you see a ` your code here`. If the deadline is still open (until the next class), submit your answers [in this Google Form](https://goo.gl/forms/6pXx1AjD4rjES1Tp2) to find out how many you got right. Load useful libraries and set defaults
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set() # This line configures seaborn's default style; recommended
###Output
_____no_output_____
###Markdown
Load the dataset For this exercise we will use the [Boston Housing](https://archive.ics.uci.edu/ml/machine-learning-databases/housing/) dataset. It gives us socio-economic data about a region and the median house price of that region. To make our lives easier, we won't fetch the data from UCI but load it directly from the Scikit-Learn library, which we will use in more depth in the next classes. To install it, one of these options should work: - pip install scikit-learn - conda install scikit-learn - python3 -m pip install scikit-learn - sudo python3 -m pip install scikit-learn. If everything fails, have a look at their [installation guide](https://scikit-learn.org/stable/install.html) or talk to one of the organizers.
###Code
# Once installed, this line should run without errors:
from sklearn.datasets import load_boston
# Since the dataset comes split into features and target,
# let's join the two into a single DataFrame.
boston_data = load_boston()
# First the features
df = pd.DataFrame(boston_data.data, columns=boston_data.feature_names)
# Now our target (the value we want to predict)
df["target"] = boston_data.target
df.head()
###Output
_____no_output_____
###Markdown
Describe the data in our table. HINT: the function you want shows count, mean, std, (...) for each of our columns...
###Code
# your code here
###Output
_____no_output_____
###Markdown
Based on the result of the function above, answer: Q1: Which feature has the largest mean? - CRIM - ZN - INDUS - CHAS - NOX - RM - AGE - DIS - RAD - TAX - PTRATIO - B - LSTAT - target. Q2: How many features have all of their values within [0, 1]? **NOTE**: the bracket notation means the endpoints are included. - 0 - 1 - 2 - 3 - 4. Going back to coding now, answer... Q3: Does this dataset contain nulls? - No - Yes. **NOTE**: if it does contain nulls, remove them from the dataset in the cell below so they don't influence the rest of the exercise.
###Code
# your code here
###Output
_____no_output_____
###Markdown
In our dataset each row represents a different region, with several pieces of information about that region in each column; our "target" column holds the median house price of that region. It is worth pointing out that the price is in thousands of dollars! Let's look at the houses that cost more/less than 20 thousand. Q4: Approximately what percentage of regions have a price greater than or equal to 20 thousand dollars? - 32 % - 47 % - 54 % - 63 %
###Code
# your code here
###Output
_____no_output_____
###Markdown
In our dataset there is a feature called **CHAS**. It indicates whether or not the region is close to the [Charles River](https://en.wikipedia.org/wiki/Charles_River). Let's check that it only takes two values, 0 or 1. Here, 1 indicates that the region is close to the river.
###Code
# Indeed, only two values.
df["CHAS"].value_counts()
###Output
_____no_output_____
###Markdown
We want to know whether being closer to the river makes a difference in the house price; to do that, we can plot the price ("target") distribution both for regions close to the river and for those that are not, and see whether there is a difference. Q5: Considering houses near and far from the river, mark the correct statements: - Houses near the river seem to concentrate more around 20 thousand dollars than those far from the river. - The price distribution for houses near the river looks like a [bimodal distribution](https://en.wikipedia.org/wiki/Multimodal_distribution). - Houses far from the river seem to be spread over the expensive ranges (above 40 thousand) more than houses near the river. **NOTE**: we can do this in two lines! (but feel free to break it into more so the indexing is more readable). **NOTE 2**: remember a certain "label" parameter we used in some kind of plot...? **NOTE 3**: use `hist=False` to make it prettier ;)
###Code
# your code here
###Output
_____no_output_____
###Markdown
Let's now consider a subset of features to analyse their distributions by quartiles.
###Code
subset = df.columns[[2, 4, 5, 6, 8, 10, 12]]
###Output
_____no_output_____
###Markdown
Plot the quartile distribution for each of the columns defined above. Note that you can do this with a `for` loop as long as you remember to call `plt.show()` on each iteration. This way, len(subset) plots will be generated, one below the other.
###Code
# your code here
###Output
_____no_output_____
###Markdown
Now answer: Q6: Which feature has the largest region between the median and the 3rd quartile? - INDUS - NOX - RM - AGE - RAD - PTRATIO - LSTAT. Q7: Which feature has outliers both to the left and to the right of the expected value range? - INDUS - NOX - RM - AGE - RAD - PTRATIO - LSTAT. **NOTE**: as the definition of [outliers](https://en.wikipedia.org/wiki/Outlier), use whatever falls outside the range (Q1−1.5⋅IQR, Q3+1.5⋅IQR). Let's now consider a subset of features to analyse their distributions and correlations.
###Code
subset = ['RM', 'AGE', 'DIS', 'B', 'LSTAT', 'target']
###Output
_____no_output_____
###Markdown
Plot the relationships between the features above and answer:
###Code
sns.pairplot(df[subset])
###Output
_____no_output_____
###Markdown
Q8: Which feature seems to have the vast majority of its values in a small range? - RM - AGE - DIS - B - LSTAT - target. Here the questions end :) ![](https://i.imgflip.com/s5spp.jpg) As a bonus, let's now illustrate the importance of understanding your data, and how a data scientist's work can be beneficial (or not) to society. Whenever we work with real data, building models that impact people, we have to be extremely careful. In this small problem, for example, the variable **B** is related to the number of people who self-declare as Afro-descendant in the city of the analysed region; notice that at no point in the analysis did we worry about how the data was acquired or whether it was anonymised; in this case the data is not that personal, since it is an average over a region, but it could be a dataset about individual consumers in which personal information is collected without the proper permissions. Let's take a closer look at the variable B. It is defined as B = 1000(Bk - 0.63)^2, where Bk is the proportion of people self-declared as Afro-descendant. Based on the image below (and since Bk cannot exceed 1), values above 150 for the variable B correspond to a low Bk: ![](https://storage.googleapis.com/kaggle-forum-message-attachments/inbox/372266/e30bf91054037667a9b69abb600ba97c/Screen%20Shot%202018-09-24%20at%204.47.17%20PM.png) Let's now analyse house prices in relation to B. We'll define a binary index, B > 150.
###Code
low_bk = df["B"] > 150
sns.distplot(df[low_bk]["target"], label="low bk", hist=False)
sns.distplot(df[~low_bk]["target"], label="high bk", hist=False)
###Output
_____no_output_____ |
HR Prediction/Code/HR Prediction__Part_2_Optimize_NumberofLayers.ipynb | ###Markdown
**Setup**
###Code
import os
import re
import zipfile
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import shutil
import matplotlib
import matplotlib.pyplot as plt
from PIL import Image
from torch.utils.data import Dataset
from torchvision import datasets, transforms, models
from sklearn.model_selection import train_test_split
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
import torchvision.datasets as dsets
processedDataset = pd.read_csv("/content/drive/MyDrive/41_softCom_project/training_data.csv")
processedDataset
processedDataset.shape
processedDataset = processedDataset.drop(columns=['enrollee_id'])
processedDataset
#processedDataset['split'] = np.random.randn(processedDataset.shape[0], 1)
msk = np.random.rand(len(processedDataset)) <= 0.9
trainDataset = processedDataset[msk]
testDataset = processedDataset[~msk]
trainDataset.head()
trainDataset.shape
testDataset.head()
testDataset.shape
###Output
_____no_output_____
###Markdown
**Train 90 / Validation 10 Split**
###Code
NormalizedDataset = np.float32(trainDataset.loc[:, trainDataset.columns != "target"].values)
NormalizedDataset
NormalizedDataset.shape
labels = trainDataset.target.values
labels.shape
train_data, test_data, train_label, test_label = train_test_split(NormalizedDataset, labels, test_size=0.1, random_state=42)
###Output
_____no_output_____
###Markdown
**Here, the held-out split above serves as the validation set**
###Code
print(len(train_data), len(test_data))
train_data = torch.from_numpy(train_data)
train_label = torch.from_numpy(train_label).type(torch.LongTensor)
test_data = torch.from_numpy(test_data)
test_label = torch.from_numpy(test_label).type(torch.LongTensor)
train_data
train_data.shape
import os
from os import path
import shutil
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import datasets, transforms, models
import matplotlib
import matplotlib.pyplot as plt
from PIL import Image
###Output
_____no_output_____
###Markdown
**Experiments** **Experiment 1**
###Code
# import libraries
import pandas as pd
import numpy as np
import torch
import torch.nn as nn
import torchvision.transforms as transforms
from torch.autograd import Variable
from sklearn.model_selection import train_test_split
import matplotlib
import matplotlib.pyplot as plt
import time
import torch
import torch.nn.functional as F
###Output
_____no_output_____
###Markdown
**Setup 1 Neural Network with 24 nodes and 2 hidden layers with ReLU Activation**| Hyper Parameters | Values | | :------------- | :----------: | | batch_size | 100 || num_iters | 10000 || num_features | 12 || output_dim | 2 || learning_rate | 0.001 || Number Of Nodes | 24 || number of hidden Layers | 2 |
###Code
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
input_dim = 12
output_dim = 2
accuracyList = []
losList = []
# Hyperparameters-----------------------------------------------
batch_size = 100
num_iters = 10000
learning_rate = 0.001
num_hidden = 24
#---------------------------------------------------------------
# Device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
#
# pytorch train and test dataset
train = torch.utils.data.TensorDataset(train_data, train_label)
test = torch.utils.data.TensorDataset(test_data, test_label)
num_epochs = num_iters / (len(train_data) / batch_size)
num_epochs = int(num_epochs)
train_loader = torch.utils.data.DataLoader(dataset=train,
batch_size=batch_size,
shuffle=True) # It's better to shuffle the whole training dataset!
test_loader = torch.utils.data.DataLoader(dataset=test,
batch_size=batch_size,
shuffle=False)
class DeepNeuralNetworkModel(nn.Module):
def __init__(self, input_size, num_classes, num_hidden):
super().__init__()
        ### 1st hidden layer: input_size --> num_hidden
self.linear_1 = nn.Linear(input_size, num_hidden)
### Non-linearity in 1st hidden layer
self.relu_1 = nn.ReLU()
### 2nd hidden layer: 100 --> 100
self.linear_2 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 2nd hidden layer
self.relu_2 = nn.ReLU()
        ### Output layer: num_hidden --> num_classes
self.linear_out = nn.Linear(num_hidden, num_classes)
def forward(self, x):
### 1st hidden layer
out = self.linear_1(x)
### Non-linearity in 1st hidden layer
out = self.relu_1(out)
### 2nd hidden layer
out = self.linear_2(out)
### Non-linearity in 2nd hidden layer
out = self.relu_2(out)
# Linear layer (output)
probas = self.linear_out(out)
return probas
model = DeepNeuralNetworkModel(input_size = input_dim,
num_classes = output_dim,
num_hidden = num_hidden)
# To enable GPU
model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
iter = 0
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
images = images.view(-1, 12).to(device)
labels = labels.to(device)
# Clear gradients w.r.t. parameters
optimizer.zero_grad()
# Forward pass to get output/logits
outputs = model(images)
# Calculate Loss: softmax --> cross entropy loss
loss = criterion(outputs, labels)
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step()
iter += 1
if iter % 500 == 0:
# Calculate Accuracy
correct = 0
total = 0
# Iterate through test dataset
for images, labels in test_loader:
images = images.view(-1, 12).to(device)
# Forward pass only to get logits/output
outputs = model(images)
# Get predictions from the maximum value
_, predicted = torch.max(outputs, 1)
# Total number of labels
total += labels.size(0)
# Total correct predictions
if torch.cuda.is_available():
correct += (predicted.cpu() == labels.cpu()).sum()
else:
correct += (predicted == labels).sum()
accuracy = 100 * correct.item() / total
# Print Loss
accuracyList.append(accuracy)
print('Iteration: {}. Loss: {}. Accuracy: {}'.format(iter, loss.item(), accuracy))
losList.append(loss.item())
###Output
_____no_output_____
###Markdown
Setup 1 Visualization
###Code
import matplotlib
import matplotlib.pyplot as plt
print (losList)
plt.plot(losList)
plt.ylabel('Cross Entropy Loss')
plt.xlabel('Iteration (in every 500)')
plt.show()
import matplotlib
import matplotlib.pyplot as plt
print (accuracyList)
plt.plot(accuracyList)
plt.ylabel('Accuracy')
plt.xlabel('Iteration (in every 500)')
plt.show()
###Output
_____no_output_____
###Markdown
**Setup 2 Neural Network with 24 nodes and 3 hidden layers with ReLU Activation**| Hyper Parameters | Values | | :------------- | :----------: | | batch_size | 100 || num_iters | 10000 || num_features | 12 || output_dim | 2 || learning_rate | 0.001 || Number Of Nodes | 24 || number of hidden Layers | 3 |
###Code
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
input_dim = 12
output_dim = 2
accuracyList = []
losList = []
# Hyperparameters-----------------------------------------------
batch_size = 100
num_iters = 10000
learning_rate = 0.001
num_hidden = 24
#---------------------------------------------------------------
# Device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
#
# pytorch train and test dataset
train = torch.utils.data.TensorDataset(train_data, train_label)
test = torch.utils.data.TensorDataset(test_data, test_label)
num_epochs = num_iters / (len(train_data) / batch_size)
num_epochs = int(num_epochs)
train_loader = torch.utils.data.DataLoader(dataset=train,
batch_size=batch_size,
shuffle=True) # It's better to shuffle the whole training dataset!
test_loader = torch.utils.data.DataLoader(dataset=test,
batch_size=batch_size,
shuffle=False)
class DeepNeuralNetworkModel(nn.Module):
def __init__(self, input_size, num_classes, num_hidden):
super().__init__()
        ### 1st hidden layer: input_size --> num_hidden
self.linear_1 = nn.Linear(input_size, num_hidden)
### Non-linearity in 1st hidden layer
self.relu_1 = nn.ReLU()
### 2nd hidden layer: 100 --> 100
self.linear_2 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 2nd hidden layer
self.relu_2 = nn.ReLU()
### 3rd hidden layer: 100 --> 100
self.linear_3 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 3rd hidden layer
self.relu_3 = nn.ReLU()
        ### Output layer: num_hidden --> num_classes
self.linear_out = nn.Linear(num_hidden, num_classes)
def forward(self, x):
### 1st hidden layer
out = self.linear_1(x)
### Non-linearity in 1st hidden layer
out = self.relu_1(out)
### 2nd hidden layer
out = self.linear_2(out)
### Non-linearity in 2nd hidden layer
out = self.relu_2(out)
### 3rd hidden layer
out = self.linear_3(out)
### Non-linearity in 3rd hidden layer
out = self.relu_3(out)
# Linear layer (output)
probas = self.linear_out(out)
return probas
model = DeepNeuralNetworkModel(input_size = input_dim,
num_classes = output_dim,
num_hidden = num_hidden)
# To enable GPU
model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
iter = 0
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
images = images.view(-1, 12).to(device)
labels = labels.to(device)
# Clear gradients w.r.t. parameters
optimizer.zero_grad()
# Forward pass to get output/logits
outputs = model(images)
# Calculate Loss: softmax --> cross entropy loss
loss = criterion(outputs, labels)
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step()
iter += 1
if iter % 500 == 0:
# Calculate Accuracy
correct = 0
total = 0
# Iterate through test dataset
for images, labels in test_loader:
images = images.view(-1, 12).to(device)
# Forward pass only to get logits/output
outputs = model(images)
# Get predictions from the maximum value
_, predicted = torch.max(outputs, 1)
# Total number of labels
total += labels.size(0)
# Total correct predictions
if torch.cuda.is_available():
correct += (predicted.cpu() == labels.cpu()).sum()
else:
correct += (predicted == labels).sum()
accuracy = 100 * correct.item() / total
# Print Loss
accuracyList.append(accuracy)
print('Iteration: {}. Loss: {}. Accuracy: {}'.format(iter, loss.item(), accuracy))
losList.append(loss.item())
###Output
_____no_output_____
###Markdown
Setup 2 Visualization
###Code
import matplotlib
import matplotlib.pyplot as plt
print (losList)
plt.plot(losList)
plt.ylabel('Cross Entropy Loss')
plt.xlabel('Iteration (in every 500)')
plt.show()
import matplotlib
import matplotlib.pyplot as plt
print (accuracyList)
plt.plot(accuracyList)
plt.ylabel('Accuracy')
plt.xlabel('Iteration (in every 500)')
plt.show()
###Output
_____no_output_____
###Markdown
**Setup 3 Neural Network with 100 nodes and 5 hidden layers with ReLU Activation**| Hyper Parameters | Values | | :------------- | :----------: | | batch_size | 100 || num_iters | 10000 || num_features | 12 || output_dim | 2 || learning_rate | 0.01 || Number Of Nodes | 100 || number of hidden Layers | 5 |
###Code
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
input_dim = 12
output_dim = 2
accuracyList = []
losList = []
# Hyperparameters-----------------------------------------------
batch_size = 100
num_iters = 10000
learning_rate = 0.01
num_hidden = 100
#---------------------------------------------------------------
# Device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
#
# pytorch train and test dataset
train = torch.utils.data.TensorDataset(train_data, train_label)
test = torch.utils.data.TensorDataset(test_data, test_label)
num_epochs = num_iters / (len(train_data) / batch_size)
num_epochs = int(num_epochs)
train_loader = torch.utils.data.DataLoader(dataset=train,
batch_size=batch_size,
shuffle=True) # It's better to shuffle the whole training dataset!
test_loader = torch.utils.data.DataLoader(dataset=test,
batch_size=batch_size,
shuffle=False)
class DeepNeuralNetworkModel(nn.Module):
def __init__(self, input_size, num_classes, num_hidden):
super().__init__()
        ### 1st hidden layer: input_size --> num_hidden
self.linear_1 = nn.Linear(input_size, num_hidden)
### Non-linearity in 1st hidden layer
self.relu_1 = nn.ReLU()
### 2nd hidden layer: 100 --> 100
self.linear_2 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 2nd hidden layer
self.relu_2 = nn.ReLU()
### 3rd hidden layer: 100 --> 100
self.linear_3 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 3rd hidden layer
self.relu_3 = nn.ReLU()
### 4th hidden layer: 100 --> 100
self.linear_4 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 4th hidden layer
self.relu_4 = nn.ReLU()
### 5th hidden layer: 100 --> 100
self.linear_5 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 5th hidden layer
self.relu_5 = nn.ReLU()
        ### Output layer: num_hidden --> num_classes
self.linear_out = nn.Linear(num_hidden, num_classes)
def forward(self, x):
### 1st hidden layer
out = self.linear_1(x)
### Non-linearity in 1st hidden layer
out = self.relu_1(out)
### 2nd hidden layer
out = self.linear_2(out)
### Non-linearity in 2nd hidden layer
out = self.relu_2(out)
### 3rd hidden layer
out = self.linear_3(out)
### Non-linearity in 3rd hidden layer
out = self.relu_3(out)
### 4th hidden layer
out = self.linear_4(out)
### Non-linearity in 4th hidden layer
out = self.relu_4(out)
### 5th hidden layer
out = self.linear_5(out)
### Non-linearity in 5th hidden layer
out = self.relu_5(out)
# Linear layer (output)
probas = self.linear_out(out)
return probas
model = DeepNeuralNetworkModel(input_size = input_dim,
num_classes = output_dim,
num_hidden = num_hidden)
# To enable GPU
model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
iter = 0
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
images = images.view(-1, 12).to(device)
labels = labels.to(device)
# Clear gradients w.r.t. parameters
optimizer.zero_grad()
# Forward pass to get output/logits
outputs = model(images)
# Calculate Loss: softmax --> cross entropy loss
loss = criterion(outputs, labels)
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step()
iter += 1
if iter % 500 == 0:
# Calculate Accuracy
correct = 0
total = 0
# Iterate through test dataset
for images, labels in test_loader:
images = images.view(-1, 12).to(device)
# Forward pass only to get logits/output
outputs = model(images)
# Get predictions from the maximum value
_, predicted = torch.max(outputs, 1)
# Total number of labels
total += labels.size(0)
# Total correct predictions
if torch.cuda.is_available():
correct += (predicted.cpu() == labels.cpu()).sum()
else:
correct += (predicted == labels).sum()
accuracy = 100 * correct.item() / total
# Print Loss
accuracyList.append(accuracy)
print('Iteration: {}. Loss: {}. Accuracy: {}'.format(iter, loss.item(), accuracy))
losList.append(loss.item())
###Output
_____no_output_____
###Markdown
Setup 3 Visualization
###Code
import matplotlib
import matplotlib.pyplot as plt
print (losList)
plt.plot(losList)
plt.ylabel('Cross Entropy Loss')
plt.xlabel('Iteration (in every 500)')
plt.show()
import matplotlib
import matplotlib.pyplot as plt
print (accuracyList)
plt.plot(accuracyList)
plt.ylabel('Accuracy')
plt.xlabel('Iteration (in every 500)')
plt.show()
from google.colab import drive
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
**Setup 4 Neural Network with 24 nodes and 7 hidden layers with ReLU Activation**| Hyper Parameters | Values | | :------------- | :----------: | | batch_size | 100 || num_iters | 10000 || num_features | 12 || output_dim | 2 || learning_rate | 0.001 || Number Of Nodes | 24 || number of hidden Layers | 7 |
###Code
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
input_dim = 12
output_dim = 2
accuracyList = []
losList = []
# Hyperparameters-----------------------------------------------
batch_size = 100
num_iters = 10000
learning_rate = 0.001
num_hidden = 24
#---------------------------------------------------------------
# Device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
#
# pytorch train and test dataset
train = torch.utils.data.TensorDataset(train_data, train_label)
test = torch.utils.data.TensorDataset(test_data, test_label)
num_epochs = num_iters / (len(train_data) / batch_size)
num_epochs = int(num_epochs)
train_loader = torch.utils.data.DataLoader(dataset=train,
batch_size=batch_size,
shuffle=True) # It's better to shuffle the whole training dataset!
test_loader = torch.utils.data.DataLoader(dataset=test,
batch_size=batch_size,
shuffle=False)
class DeepNeuralNetworkModel(nn.Module):
def __init__(self, input_size, num_classes, num_hidden):
super().__init__()
        ### 1st hidden layer: input_size --> num_hidden
self.linear_1 = nn.Linear(input_size, num_hidden)
### Non-linearity in 1st hidden layer
self.relu_1 = nn.ReLU()
### 2nd hidden layer: 100 --> 100
self.linear_2 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 2nd hidden layer
self.relu_2 = nn.ReLU()
### 3rd hidden layer: 100 --> 100
self.linear_3 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 3rd hidden layer
self.relu_3 = nn.ReLU()
### 4th hidden layer: 100 --> 100
self.linear_4 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 4th hidden layer
self.relu_4 = nn.ReLU()
### 5th hidden layer: 100 --> 100
self.linear_5 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 5th hidden layer
self.relu_5 = nn.ReLU()
### 6th hidden layer: 100 --> 100
self.linear_6 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 6th hidden layer
self.relu_6 = nn.ReLU()
### 7th hidden layer: 100 --> 100
self.linear_7 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 7th hidden layer
self.relu_7 = nn.ReLU()
        ### Output layer: num_hidden --> num_classes
self.linear_out = nn.Linear(num_hidden, num_classes)
def forward(self, x):
### 1st hidden layer
out = self.linear_1(x)
### Non-linearity in 1st hidden layer
out = self.relu_1(out)
### 2nd hidden layer
out = self.linear_2(out)
### Non-linearity in 2nd hidden layer
out = self.relu_2(out)
### 3rd hidden layer
out = self.linear_3(out)
### Non-linearity in 3rd hidden layer
out = self.relu_3(out)
### 4th hidden layer
out = self.linear_4(out)
### Non-linearity in 4th hidden layer
out = self.relu_4(out)
### 5th hidden layer
out = self.linear_5(out)
### Non-linearity in 5th hidden layer
out = self.relu_5(out)
### 6th hidden layer
out = self.linear_6(out)
### Non-linearity in 6th hidden layer
out = self.relu_6(out)
### 7th hidden layer
out = self.linear_7(out)
### Non-linearity in 7th hidden layer
out = self.relu_7(out)
# Linear layer (output)
probas = self.linear_out(out)
return probas
model = DeepNeuralNetworkModel(input_size = input_dim,
num_classes = output_dim,
num_hidden = num_hidden)
# To enable GPU
model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
iter = 0
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
images = images.view(-1, 12).to(device)
labels = labels.to(device)
# Clear gradients w.r.t. parameters
optimizer.zero_grad()
# Forward pass to get output/logits
outputs = model(images)
# Calculate Loss: softmax --> cross entropy loss
loss = criterion(outputs, labels)
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step()
iter += 1
if iter % 500 == 0:
# Calculate Accuracy
correct = 0
total = 0
# Iterate through test dataset
for images, labels in test_loader:
images = images.view(-1, 12).to(device)
# Forward pass only to get logits/output
outputs = model(images)
# Get predictions from the maximum value
_, predicted = torch.max(outputs, 1)
# Total number of labels
total += labels.size(0)
# Total correct predictions
if torch.cuda.is_available():
correct += (predicted.cpu() == labels.cpu()).sum()
else:
correct += (predicted == labels).sum()
accuracy = 100 * correct.item() / total
# Print Loss
accuracyList.append(accuracy)
print('Iteration: {}. Loss: {}. Accuracy: {}'.format(iter, loss.item(), accuracy))
losList.append(loss.item())
###Output
_____no_output_____
###Markdown
Setup 4 Visualization
###Code
import matplotlib
import matplotlib.pyplot as plt
print (losList)
plt.plot(losList)
plt.ylabel('Cross Entropy Loss')
plt.xlabel('Iteration (in every 500)')
plt.show()
import matplotlib
import matplotlib.pyplot as plt
print (accuracyList)
plt.plot(accuracyList)
plt.ylabel('Accuracy')
plt.xlabel('Iteration (in every 500)')
plt.show()
###Output
_____no_output_____
###Markdown
**Setup 5 Neural Network with 24 nodes and 9 hidden layers with ReLU Activation**| Hyper Parameters | Values | | :------------- | :----------: | | batch_size | 100 || num_iters | 10000 || num_features | 12 || output_dim | 2 || learning_rate | 0.001 || Number Of Nodes | 24 || number of hidden Layers | 9 |
###Code
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
input_dim = 12
output_dim = 2
accuracyList = []
losList = []
# Hyperparameters-----------------------------------------------
batch_size = 100
num_iters = 10000
learning_rate = 0.001
num_hidden = 24
#---------------------------------------------------------------
# Device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
#
# pytorch train and test dataset
train = torch.utils.data.TensorDataset(train_data, train_label)
test = torch.utils.data.TensorDataset(test_data, test_label)
num_epochs = num_iters / (len(train_data) / batch_size)
num_epochs = int(num_epochs)
train_loader = torch.utils.data.DataLoader(dataset=train,
batch_size=batch_size,
shuffle=True) # It's better to shuffle the whole training dataset!
test_loader = torch.utils.data.DataLoader(dataset=test,
batch_size=batch_size,
shuffle=False)
class DeepNeuralNetworkModel(nn.Module):
def __init__(self, input_size, num_classes, num_hidden):
super().__init__()
        ### 1st hidden layer: input_size --> num_hidden
self.linear_1 = nn.Linear(input_size, num_hidden)
### Non-linearity in 1st hidden layer
self.relu_1 = nn.ReLU()
### 2nd hidden layer: 100 --> 100
self.linear_2 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 2nd hidden layer
self.relu_2 = nn.ReLU()
### 3rd hidden layer: 100 --> 100
self.linear_3 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 3rd hidden layer
self.relu_3 = nn.ReLU()
### 4th hidden layer: 100 --> 100
self.linear_4 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 4th hidden layer
self.relu_4 = nn.ReLU()
### 5th hidden layer: 100 --> 100
self.linear_5 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 5th hidden layer
self.relu_5 = nn.ReLU()
### 6th hidden layer: 100 --> 100
self.linear_6 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 6th hidden layer
self.relu_6 = nn.ReLU()
### 7th hidden layer: 100 --> 100
self.linear_7 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 7th hidden layer
self.relu_7 = nn.ReLU()
### 8th hidden layer: 100 --> 100
self.linear_8 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 8th hidden layer
self.relu_8 = nn.ReLU()
### 9th hidden layer: 100 --> 100
self.linear_9 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 9th hidden layer
self.relu_9 = nn.ReLU()
        ### Output layer: num_hidden --> num_classes
self.linear_out = nn.Linear(num_hidden, num_classes)
def forward(self, x):
### 1st hidden layer
out = self.linear_1(x)
### Non-linearity in 1st hidden layer
out = self.relu_1(out)
### 2nd hidden layer
out = self.linear_2(out)
### Non-linearity in 2nd hidden layer
out = self.relu_2(out)
### 3rd hidden layer
out = self.linear_3(out)
### Non-linearity in 3rd hidden layer
out = self.relu_3(out)
### 4th hidden layer
out = self.linear_4(out)
### Non-linearity in 4th hidden layer
out = self.relu_4(out)
### 5th hidden layer
out = self.linear_5(out)
### Non-linearity in 5th hidden layer
out = self.relu_5(out)
### 6th hidden layer
out = self.linear_6(out)
### Non-linearity in 6th hidden layer
out = self.relu_6(out)
### 7th hidden layer
out = self.linear_7(out)
### Non-linearity in 7th hidden layer
out = self.relu_7(out)
### 8th hidden layer
out = self.linear_8(out)
### Non-linearity in 8th hidden layer
out = self.relu_8(out)
### 9th hidden layer
out = self.linear_9(out)
### Non-linearity in 9th hidden layer
out = self.relu_9(out)
# Linear layer (output)
probas = self.linear_out(out)
return probas
model = DeepNeuralNetworkModel(input_size = input_dim,
num_classes = output_dim,
num_hidden = num_hidden)
# To enable GPU
model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
iter = 0
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
images = images.view(-1, 12).to(device)
labels = labels.to(device)
# Clear gradients w.r.t. parameters
optimizer.zero_grad()
# Forward pass to get output/logits
outputs = model(images)
# Calculate Loss: softmax --> cross entropy loss
loss = criterion(outputs, labels)
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step()
iter += 1
if iter % 500 == 0:
# Calculate Accuracy
correct = 0
total = 0
# Iterate through test dataset
for images, labels in test_loader:
images = images.view(-1, 12).to(device)
# Forward pass only to get logits/output
outputs = model(images)
# Get predictions from the maximum value
_, predicted = torch.max(outputs, 1)
# Total number of labels
total += labels.size(0)
# Total correct predictions
if torch.cuda.is_available():
correct += (predicted.cpu() == labels.cpu()).sum()
else:
correct += (predicted == labels).sum()
accuracy = 100 * correct.item() / total
# Print Loss
accuracyList.append(accuracy)
print('Iteration: {}. Loss: {}. Accuracy: {}'.format(iter, loss.item(), accuracy))
losList.append(loss.item())
###Output
_____no_output_____
###Markdown
Setup 5 Visualization
###Code
import matplotlib
import matplotlib.pyplot as plt
print (losList)
plt.plot(losList)
plt.ylabel('Cross Entropy Loss')
plt.xlabel('Iteration (in every 500)')
plt.show()
import matplotlib
import matplotlib.pyplot as plt
print (accuracyList)
plt.plot(accuracyList)
plt.ylabel('Accuracy')
plt.xlabel('Iteration (in every 500)')
plt.show()
###Output
_____no_output_____
###Markdown
**Setup 6 Neural Network with 24 nodes and 9 hidden layers with ReLU Activation**| Hyper Parameters | Values | | :------------- | :----------: | | batch_size | 200 || num_iters | 10000 || num_features | 12 || output_dim | 2 || learning_rate | 0.03 || Number Of Nodes | 24 || number of hidden Layers | 9 |
###Code
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
input_dim = 12
output_dim = 2
accuracyList = []
losList = []
# Hyperparameters-----------------------------------------------
batch_size = 200
num_iters = 10000
learning_rate = 0.03
num_hidden = 24
#---------------------------------------------------------------
# Device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
#
# pytorch train and test dataset
train = torch.utils.data.TensorDataset(train_data, train_label)
test = torch.utils.data.TensorDataset(test_data, test_label)
num_epochs = num_iters / (len(train_data) / batch_size)
num_epochs = int(num_epochs)
train_loader = torch.utils.data.DataLoader(dataset=train,
batch_size=batch_size,
shuffle=True) # It's better to shuffle the whole training dataset!
test_loader = torch.utils.data.DataLoader(dataset=test,
batch_size=batch_size,
shuffle=False)
class DeepNeuralNetworkModel(nn.Module):
def __init__(self, input_size, num_classes, num_hidden):
super().__init__()
        ### 1st hidden layer: input_size --> num_hidden
self.linear_1 = nn.Linear(input_size, num_hidden)
### Non-linearity in 1st hidden layer
self.relu_1 = nn.ReLU()
### 2nd hidden layer: 100 --> 100
self.linear_2 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 2nd hidden layer
self.relu_2 = nn.ReLU()
### 3rd hidden layer: 100 --> 100
self.linear_3 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 3rd hidden layer
self.relu_3 = nn.ReLU()
### 4th hidden layer: 100 --> 100
self.linear_4 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 4th hidden layer
self.relu_4 = nn.ReLU()
### 5th hidden layer: 100 --> 100
self.linear_5 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 5th hidden layer
self.relu_5 = nn.ReLU()
### 6th hidden layer: 100 --> 100
self.linear_6 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 6th hidden layer
self.relu_6 = nn.ReLU()
### 7th hidden layer: 100 --> 100
self.linear_7 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 7th hidden layer
self.relu_7 = nn.ReLU()
### 8th hidden layer: 100 --> 100
self.linear_8 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 8th hidden layer
self.relu_8 = nn.ReLU()
### 9th hidden layer: 100 --> 100
self.linear_9 = nn.Linear(num_hidden, num_hidden)
### Non-linearity in 9th hidden layer
self.relu_9 = nn.ReLU()
        ### Output layer: num_hidden --> num_classes
self.linear_out = nn.Linear(num_hidden, num_classes)
def forward(self, x):
### 1st hidden layer
out = self.linear_1(x)
### Non-linearity in 1st hidden layer
out = self.relu_1(out)
### 2nd hidden layer
out = self.linear_2(out)
### Non-linearity in 2nd hidden layer
out = self.relu_2(out)
### 3rd hidden layer
out = self.linear_3(out)
### Non-linearity in 3rd hidden layer
out = self.relu_3(out)
### 4th hidden layer
out = self.linear_4(out)
### Non-linearity in 4th hidden layer
out = self.relu_4(out)
### 5th hidden layer
out = self.linear_5(out)
### Non-linearity in 5th hidden layer
out = self.relu_5(out)
### 6th hidden layer
out = self.linear_6(out)
### Non-linearity in 6th hidden layer
out = self.relu_6(out)
### 7th hidden layer
out = self.linear_7(out)
### Non-linearity in 7th hidden layer
out = self.relu_7(out)
### 8th hidden layer
out = self.linear_8(out)
### Non-linearity in 8th hidden layer
out = self.relu_8(out)
### 9th hidden layer
out = self.linear_9(out)
### Non-linearity in 9th hidden layer
out = self.relu_9(out)
# Linear layer (output)
probas = self.linear_out(out)
return probas
model = DeepNeuralNetworkModel(input_size = input_dim,
num_classes = output_dim,
num_hidden = num_hidden)
# To enable GPU
model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
iter = 0
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
images = images.view(-1, 12).to(device)
labels = labels.to(device)
# Clear gradients w.r.t. parameters
optimizer.zero_grad()
# Forward pass to get output/logits
outputs = model(images)
# Calculate Loss: softmax --> cross entropy loss
loss = criterion(outputs, labels)
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step()
iter += 1
if iter % 500 == 0:
# Calculate Accuracy
correct = 0
total = 0
# Iterate through test dataset
for images, labels in test_loader:
images = images.view(-1, 12).to(device)
# Forward pass only to get logits/output
outputs = model(images)
# Get predictions from the maximum value
_, predicted = torch.max(outputs, 1)
# Total number of labels
total += labels.size(0)
# Total correct predictions
if torch.cuda.is_available():
correct += (predicted.cpu() == labels.cpu()).sum()
else:
correct += (predicted == labels).sum()
accuracy = 100 * correct.item() / total
# Print Loss
accuracyList.append(accuracy)
print('Iteration: {}. Loss: {}. Accuracy: {}'.format(iter, loss.item(), accuracy))
losList.append(loss.item())
###Output
_____no_output_____
###Markdown
Setup 6 Visualization
###Code
import matplotlib
import matplotlib.pyplot as plt
print (losList)
plt.plot(losList)
plt.ylabel('Cross Entropy Loss')
plt.xlabel('Iteration (in every 500)')
plt.show()
import matplotlib
import matplotlib.pyplot as plt
print (accuracyList)
plt.plot(accuracyList)
plt.ylabel('Accuracy')
plt.xlabel('Iteration (in every 500)')
plt.show()
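# Illustrative refactor (a sketch, not part of the original experiments): the six hand-written
# DeepNeuralNetworkModel variants above differ only in width and depth, so the same sweep could
# be driven by a single builder based on nn.Sequential (assumes `torch.nn as nn` from the cells above).
def build_mlp(input_size, num_classes, num_hidden, num_layers):
    layers = [nn.Linear(input_size, num_hidden), nn.ReLU()]
    for _ in range(num_layers - 1):
        layers += [nn.Linear(num_hidden, num_hidden), nn.ReLU()]
    layers.append(nn.Linear(num_hidden, num_classes))
    return nn.Sequential(*layers)
# e.g. build_mlp(12, 2, 24, 9) matches the Setup 6 architecture.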
###Output
_____no_output_____ |
docs/source/notebooks/GLM-negative-binomial-regression.ipynb | ###Markdown
GLM: Negative Binomial RegressionThis notebook demos negative binomial regression using the `glm` submodule. It closely follows the GLM Poisson regression example by [Jonathan Sedar](https://github.com/jonsedar) (which is in turn inspired by [a project by Ian Osvald](http://ianozsvald.com/2016/05/07/statistically-solving-sneezes-and-sniffles-a-work-in-progress-report-at-pydatalondon-2016/)) except the data here is negative binomially distributed instead of Poisson distributed.Negative binomial regression is used to model count data for which the variance is higher than the mean. The [negative binomial distribution](https://en.wikipedia.org/wiki/Negative_binomial_distribution) can be thought of as a Poisson distribution whose rate parameter is gamma distributed, so that rate parameter can be adjusted to account for the increased variance. Contents+ [Setup](Setup) + [Convenience Functions](Convenience-Functions) + [Generate Data](Generate-Data) + [Poisson Data](Poisson-Data) + [Negative Binomial Data](Negative-Binomial-Data) + [Visualize the Data](Visualize-the-Data)+ [Negative Binomial Regression](Negative-Binomial-Regression) + [Create GLM Model](Create-GLM-Model) + [View Results](View-Results) Setup
###Code
import numpy as np
import pandas as pd
import pymc3 as pm
from scipy import stats
from scipy import optimize
import matplotlib.pyplot as plt
import seaborn as sns
import re
%matplotlib inline
###Output
_____no_output_____
###Markdown
Convenience FunctionsTaken from the Poisson regression example.
###Code
def plot_traces(trcs, varnames=None):
'''Plot traces with overlaid means and values'''
nrows = len(trcs.varnames)
if varnames is not None:
nrows = len(varnames)
ax = pm.traceplot(trcs, varnames=varnames, figsize=(12,nrows*1.4),
lines={k: v['mean'] for k, v in
pm.df_summary(trcs,varnames=varnames).iterrows()})
for i, mn in enumerate(pm.df_summary(trcs, varnames=varnames)['mean']):
ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data',
xytext=(5,10), textcoords='offset points', rotation=90,
va='bottom', fontsize='large', color='#AA0022')
def strip_derived_rvs(rvs):
'''Remove PyMC3-generated RVs from a list'''
ret_rvs = []
for rv in rvs:
if not (re.search('_log',rv.name) or re.search('_interval',rv.name)):
ret_rvs.append(rv)
return ret_rvs
###Output
_____no_output_____
###Markdown
Generate DataAs in the Poisson regression example, we assume that sneezing occurs at some baseline rate, and that consuming alcohol, not taking antihistamines, or doing both, increase its frequency. Poisson DataFirst, let's look at some Poisson distributed data from the Poisson regression example.
###Code
np.random.seed(123)
# Mean Poisson values
theta_noalcohol_meds = 1 # no alcohol, took an antihist
theta_alcohol_meds = 3 # alcohol, took an antihist
theta_noalcohol_nomeds = 6 # no alcohol, no antihist
theta_alcohol_nomeds = 36 # alcohol, no antihist
# Create samples
q = 1000
df_pois = pd.DataFrame({
'nsneeze': np.concatenate((np.random.poisson(theta_noalcohol_meds, q),
np.random.poisson(theta_alcohol_meds, q),
np.random.poisson(theta_noalcohol_nomeds, q),
np.random.poisson(theta_alcohol_nomeds, q))),
'alcohol': np.concatenate((np.repeat(False, q),
np.repeat(True, q),
np.repeat(False, q),
np.repeat(True, q))),
'nomeds': np.concatenate((np.repeat(False, q),
np.repeat(False, q),
np.repeat(True, q),
np.repeat(True, q)))})
df_pois.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
###Output
_____no_output_____
###Markdown
Since the mean and variance of a Poisson distributed random variable are equal, the sample means and variances are very close. Negative Binomial DataNow, suppose every subject in the dataset had the flu, increasing the variance of their sneezing (and causing an unfortunate few to sneeze over 70 times a day). If the mean number of sneezes stays the same but variance increases, the data might follow a negative binomial distribution.
###Code
# Gamma shape parameter
alpha = 10
def get_nb_vals(mu, alpha, size):
"""Generate negative binomially distributed samples by
drawing a sample from a gamma distribution with mean `mu` and
shape parameter `alpha', then drawing from a Poisson
distribution whose rate parameter is given by the sampled
gamma variable.
"""
g = stats.gamma.rvs(alpha, scale=mu / alpha, size=size)
return stats.poisson.rvs(g)
# Create samples
n = 1000
df = pd.DataFrame({
'nsneeze': np.concatenate((get_nb_vals(theta_noalcohol_meds, alpha, n),
get_nb_vals(theta_alcohol_meds, alpha, n),
get_nb_vals(theta_noalcohol_nomeds, alpha, n),
get_nb_vals(theta_alcohol_nomeds, alpha, n))),
'alcohol': np.concatenate((np.repeat(False, n),
np.repeat(True, n),
np.repeat(False, n),
np.repeat(True, n))),
'nomeds': np.concatenate((np.repeat(False, n),
np.repeat(False, n),
np.repeat(True, n),
np.repeat(True, n)))})
df.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
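# Illustrative check (not in the original example): a gamma-Poisson mixture with gamma shape
# `alpha` and mean `mu` is negative binomial with mean mu and variance mu + mu**2 / alpha,
# so the sample variances above should sit close to these theoretical values.
for mu in (theta_noalcohol_meds, theta_alcohol_meds, theta_noalcohol_nomeds, theta_alcohol_nomeds):
    print(mu, mu + mu ** 2 / alpha)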
###Output
_____no_output_____
###Markdown
As in the Poisson regression example, we see that drinking alcohol and/or not taking antihistamines increase the sneezing rate to varying degrees. Unlike in that example, for each combination of `alcohol` and `nomeds`, the variance of `nsneeze` is higher than the mean. This suggests that a Poisson distribution would be a poor fit for the data since the mean and variance of a Poisson distribution are equal. Visualize the Data
###Code
g = sns.factorplot(x='nsneeze', row='nomeds', col='alcohol', data=df, kind='count', aspect=1.5)
# Make x-axis ticklabels less crowded
ax = g.axes[1, 0]
labels = range(len(ax.get_xticklabels(which='both')))
ax.set_xticks(labels[::5])
ax.set_xticklabels(labels[::5]);
###Output
_____no_output_____
###Markdown
Negative Binomial Regression Create GLM Model
###Code
fml = 'nsneeze ~ alcohol + nomeds + alcohol:nomeds'
with pm.Model() as model:
pm.glm.GLM.from_formula(formula=fml, data=df, family=pm.glm.families.NegativeBinomial())
# Old initialization
# start = pm.find_MAP(fmin=optimize.fmin_powell)
# C = pm.approx_hessian(start)
# trace = pm.sample(4000, step=pm.NUTS(scaling=C))
trace = pm.sample(2000, cores=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using ADVI...
Average Loss = 10,338: 7%|▋ | 13828/200000 [00:22<06:19, 490.54it/s]
Convergence archived at 13900
Interrupted at 13,900 [6%]: Average Loss = 14,046
100%|██████████| 2500/2500 [11:04<00:00, 2.93it/s]
###Markdown
View Results
###Code
rvs = [rv.name for rv in strip_derived_rvs(model.unobserved_RVs)]
plot_traces(trace[1000:], varnames=rvs);
# Transform coefficients to recover parameter values
np.exp(pm.df_summary(trace[1000:], varnames=rvs)[['mean','hpd_2.5','hpd_97.5']])
###Output
_____no_output_____
###Markdown
The mean values are close to the values we specified when generating the data:- The base rate is a constant 1.- Drinking alcohol triples the base rate.- Not taking antihistamines increases the base rate by 6 times.- Drinking alcohol and not taking antihistamines doubles the rate that would be expected if their rates were independent. If they were independent, then doing both would increase the base rate by 3\*6=18 times, but instead the base rate is increased by 3\*6\*2=16 times.Finally, even though the sample for `mu` is highly skewed, its median value is close to the sample mean, and the mean of `alpha` is also quite close to its actual value of 10.
###Code
np.percentile(trace[1000:]['mu'], [25,50,75])
df.nsneeze.mean()
trace[1000:]['alpha'].mean()
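# Illustrative check (not in the original example): summing the coefficients for the
# alcohol + no-antihistamine group and exponentiating should recover a rate near
# 1 * 3 * 6 * 2 = 36. The coefficient names follow the patsy convention reported by the
# sampler (`Intercept`, `alcohol[T.True]`, `nomeds[T.True]`, `alcohol[T.True]:nomeds[T.True]`).
coef_names = ['Intercept', 'alcohol[T.True]', 'nomeds[T.True]', 'alcohol[T.True]:nomeds[T.True]']
implied_rate = np.exp(sum(trace[1000:][name].mean() for name in coef_names))
print(implied_rate, df[df.alcohol & df.nomeds].nsneeze.mean())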
###Output
_____no_output_____
###Markdown
GLM: Negative Binomial Regression
###Code
import re
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import seaborn as sns
from scipy import stats
print('Running on PyMC3 v{}'.format(pm.__version__))
%config InlineBackend.figure_format = 'retina'
az.style.use('arviz-darkgrid')
###Output
_____no_output_____
###Markdown
This notebook demos negative binomial regression using the `glm` submodule. It closely follows the GLM Poisson regression example by [Jonathan Sedar](https://github.com/jonsedar) (which is in turn inspired by [a project by Ian Osvald](http://ianozsvald.com/2016/05/07/statistically-solving-sneezes-and-sniffles-a-work-in-progress-report-at-pydatalondon-2016/)) except the data here is negative binomially distributed instead of Poisson distributed.Negative binomial regression is used to model count data for which the variance is higher than the mean. The [negative binomial distribution](https://en.wikipedia.org/wiki/Negative_binomial_distribution) can be thought of as a Poisson distribution whose rate parameter is gamma distributed, so that rate parameter can be adjusted to account for the increased variance. Convenience FunctionsTaken from the Poisson regression example.
###Code
def plot_traces(trcs, varnames=None):
'''Plot traces with overlaid means and values'''
nrows = len(trcs.varnames)
if varnames is not None:
nrows = len(varnames)
ax = pm.traceplot(trcs, var_names=varnames, figsize=(12, nrows*1.4),
lines=tuple([(k, {}, v['mean'])
for k, v in pm.summary(trcs, varnames=varnames).iterrows()]))
for i, mn in enumerate(pm.summary(trcs, varnames=varnames)['mean']):
ax[i, 0].annotate('{:.2f}'.format(mn), xy=(mn, 0), xycoords='data',
xytext=(5, 10), textcoords='offset points', rotation=90,
va='bottom', fontsize='large', color='#AA0022')
def strip_derived_rvs(rvs):
'''Remove PyMC3-generated RVs from a list'''
ret_rvs = []
for rv in rvs:
if not (re.search('_log', rv.name) or re.search('_interval', rv.name)):
ret_rvs.append(rv)
return ret_rvs
###Output
_____no_output_____
###Markdown
Generate DataAs in the Poisson regression example, we assume that sneezing occurs at some baseline rate, and that consuming alcohol, not taking antihistamines, or doing both, increase its frequency. Poisson DataFirst, let's look at some Poisson distributed data from the Poisson regression example.
###Code
np.random.seed(123)
# Mean Poisson values
theta_noalcohol_meds = 1 # no alcohol, took an antihist
theta_alcohol_meds = 3 # alcohol, took an antihist
theta_noalcohol_nomeds = 6 # no alcohol, no antihist
theta_alcohol_nomeds = 36 # alcohol, no antihist
# Create samples
q = 1000
df_pois = pd.DataFrame({
'nsneeze': np.concatenate((np.random.poisson(theta_noalcohol_meds, q),
np.random.poisson(theta_alcohol_meds, q),
np.random.poisson(theta_noalcohol_nomeds, q),
np.random.poisson(theta_alcohol_nomeds, q))),
'alcohol': np.concatenate((np.repeat(False, q),
np.repeat(True, q),
np.repeat(False, q),
np.repeat(True, q))),
'nomeds': np.concatenate((np.repeat(False, q),
np.repeat(False, q),
np.repeat(True, q),
np.repeat(True, q)))})
df_pois.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
###Output
_____no_output_____
###Markdown
Since the mean and variance of a Poisson distributed random variable are equal, the sample means and variances are very close. Negative Binomial DataNow, suppose every subject in the dataset had the flu, increasing the variance of their sneezing (and causing an unfortunate few to sneeze over 70 times a day). If the mean number of sneezes stays the same but variance increases, the data might follow a negative binomial distribution.
###Code
# Gamma shape parameter
alpha = 10
def get_nb_vals(mu, alpha, size):
"""Generate negative binomially distributed samples by
drawing a sample from a gamma distribution with mean `mu` and
shape parameter `alpha', then drawing from a Poisson
distribution whose rate parameter is given by the sampled
gamma variable.
"""
g = stats.gamma.rvs(alpha, scale=mu / alpha, size=size)
return stats.poisson.rvs(g)
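# Illustrative alternative (not in the original example): the same gamma-Poisson mixture can
# be drawn directly as a negative binomial with n = alpha and p = alpha / (alpha + mu).
def get_nb_vals_direct(mu, alpha, size):
    return stats.nbinom.rvs(alpha, alpha / (alpha + mu), size=size)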
# Create samples
n = 1000
df = pd.DataFrame({
'nsneeze': np.concatenate((get_nb_vals(theta_noalcohol_meds, alpha, n),
get_nb_vals(theta_alcohol_meds, alpha, n),
get_nb_vals(theta_noalcohol_nomeds, alpha, n),
get_nb_vals(theta_alcohol_nomeds, alpha, n))),
'alcohol': np.concatenate((np.repeat(False, n),
np.repeat(True, n),
np.repeat(False, n),
np.repeat(True, n))),
'nomeds': np.concatenate((np.repeat(False, n),
np.repeat(False, n),
np.repeat(True, n),
np.repeat(True, n)))})
df.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
###Output
_____no_output_____
###Markdown
As in the Poisson regression example, we see that drinking alcohol and/or not taking antihistamines increase the sneezing rate to varying degrees. Unlike in that example, for each combination of `alcohol` and `nomeds`, the variance of `nsneeze` is higher than the mean. This suggests that a Poisson distribution would be a poor fit for the data since the mean and variance of a Poisson distribution are equal. Visualize the Data
###Code
g = sns.catplot(x='nsneeze', row='nomeds', col='alcohol', data=df, kind='count', aspect=1.5)
# Make x-axis ticklabels less crowded
ax = g.axes[1, 0]
labels = range(len(ax.get_xticklabels(which='both')))
ax.set_xticks(labels[::5])
ax.set_xticklabels(labels[::5]);
###Output
_____no_output_____
###Markdown
Negative Binomial Regression Create GLM Model
###Code
fml = 'nsneeze ~ alcohol + nomeds + alcohol:nomeds'
with pm.Model() as model:
pm.glm.GLM.from_formula(formula=fml, data=df, family=pm.glm.families.NegativeBinomial())
trace = pm.sample(1000, tune=1000, cores=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [alpha, mu, alcohol[T.True]:nomeds[T.True], nomeds[T.True], alcohol[T.True], Intercept]
Sampling 2 chains: 100%|██████████| 4000/4000 [01:08<00:00, 58.46draws/s]
The number of effective samples is smaller than 25% for some parameters.
###Markdown
View Results
###Code
varnames = [rv.name for rv in strip_derived_rvs(model.unobserved_RVs)]
plot_traces(trace, varnames=varnames);
# Transform coefficients to recover parameter values
np.exp(pm.summary(trace, varnames=varnames)[['mean','hpd_2.5','hpd_97.5']])
###Output
_____no_output_____
###Markdown
The mean values are close to the values we specified when generating the data:- The base rate is a constant 1.- Drinking alcohol triples the base rate.- Not taking antihistamines increases the base rate by 6 times.- Drinking alcohol and not taking antihistamines doubles the rate that would be expected if their rates were independent. If they were independent, then doing both would increase the base rate by 3\*6=18 times, but instead the base rate is increased by 3\*6\*2=16 times.Finally, even though the sample for `mu` is highly skewed, its median value is close to the sample mean, and the mean of `alpha` is also quite close to its actual value of 10.
###Code
np.percentile(trace['mu'], [25,50,75])
df.nsneeze.mean()
trace['alpha'].mean()
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
pymc3 3.8
arviz 0.8.3
numpy 1.17.5
last updated: Thu Jun 11 2020
CPython 3.8.2
IPython 7.11.0
watermark 2.0.2
###Markdown
GLM: Negative Binomial RegressionThis notebook demos negative binomial regression using the `glm` submodule. It closely follows the GLM Poisson regression example by [Jonathan Sedar](https://github.com/jonsedar) (which is in turn inspired by [a project by Ian Osvald](http://ianozsvald.com/2016/05/07/statistically-solving-sneezes-and-sniffles-a-work-in-progress-report-at-pydatalondon-2016/)) except the data here is negative binomially distributed instead of Poisson distributed.Negative binomial regression is used to model count data for which the variance is higher than the mean. The [negative binomial distribution](https://en.wikipedia.org/wiki/Negative_binomial_distribution) can be thought of as a Poisson distribution whose rate parameter is gamma distributed, so that rate parameter can be adjusted to account for the increased variance. Contents+ [Setup](Setup) + [Convenience Functions](Convenience-Functions) + [Generate Data](Generate-Data) + [Poisson Data](Poisson-Data) + [Negative Binomial Data](Negative-Binomial-Data) + [Visualize the Data](Visualize-the-Data)+ [Negative Binomial Regression](Negative-Binomial-Regression) + [Create GLM Model](Create-GLM-Model) + [View Results](View-Results) Setup
###Code
import numpy as np
import pandas as pd
import pymc3 as pm
from scipy import stats
from scipy import optimize
import matplotlib.pyplot as plt
import seaborn as sns
import re
%matplotlib inline
###Output
_____no_output_____
###Markdown
Convenience FunctionsTaken from the Poisson regression example.
###Code
def plot_traces(trcs, varnames=None):
'''Plot traces with overlaid means and values'''
nrows = len(trcs.varnames)
if varnames is not None:
nrows = len(varnames)
ax = pm.traceplot(trcs, varnames=varnames, figsize=(12,nrows*1.4),
lines={k: v['mean'] for k, v in
pm.df_summary(trcs,varnames=varnames).iterrows()})
for i, mn in enumerate(pm.df_summary(trcs, varnames=varnames)['mean']):
ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data',
xytext=(5,10), textcoords='offset points', rotation=90,
va='bottom', fontsize='large', color='#AA0022')
def strip_derived_rvs(rvs):
'''Remove PyMC3-generated RVs from a list'''
ret_rvs = []
for rv in rvs:
if not (re.search('_log',rv.name) or re.search('_interval',rv.name)):
ret_rvs.append(rv)
return ret_rvs
###Output
_____no_output_____
###Markdown
Generate DataAs in the Poisson regression example, we assume that sneezing occurs at some baseline rate, and that consuming alcohol, not taking antihistamines, or doing both, increase its frequency. Poisson DataFirst, let's look at some Poisson distributed data from the Poisson regression example.
###Code
np.random.seed(123)
# Mean Poisson values
theta_noalcohol_meds = 1 # no alcohol, took an antihist
theta_alcohol_meds = 3 # alcohol, took an antihist
theta_noalcohol_nomeds = 6 # no alcohol, no antihist
theta_alcohol_nomeds = 36 # alcohol, no antihist
# Create samples
q = 1000
df_pois = pd.DataFrame({
'nsneeze': np.concatenate((np.random.poisson(theta_noalcohol_meds, q),
np.random.poisson(theta_alcohol_meds, q),
np.random.poisson(theta_noalcohol_nomeds, q),
np.random.poisson(theta_alcohol_nomeds, q))),
'alcohol': np.concatenate((np.repeat(False, q),
np.repeat(True, q),
np.repeat(False, q),
np.repeat(True, q))),
'nomeds': np.concatenate((np.repeat(False, q),
np.repeat(False, q),
np.repeat(True, q),
np.repeat(True, q)))})
df_pois.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
###Output
_____no_output_____
###Markdown
Since the mean and variance of a Poisson distributed random variable are equal, the sample means and variances are very close. Negative Binomial DataNow, suppose every subject in the dataset had the flu, increasing the variance of their sneezing (and causing an unfortunate few to sneeze over 70 times a day). If the mean number of sneezes stays the same but variance increases, the data might follow a negative binomial distribution.
###Code
# Gamma shape parameter
alpha = 10
def get_nb_vals(mu, alpha, size):
"""Generate negative binomially distributed samples by
drawing a sample from a gamma distribution with mean `mu` and
shape parameter `alpha', then drawing from a Poisson
distribution whose rate parameter is given by the sampled
gamma variable.
"""
g = stats.gamma.rvs(alpha, scale=mu / alpha, size=size)
return stats.poisson.rvs(g)
# Create samples
n = 1000
df = pd.DataFrame({
'nsneeze': np.concatenate((get_nb_vals(theta_noalcohol_meds, alpha, n),
get_nb_vals(theta_alcohol_meds, alpha, n),
get_nb_vals(theta_noalcohol_nomeds, alpha, n),
get_nb_vals(theta_alcohol_nomeds, alpha, n))),
'alcohol': np.concatenate((np.repeat(False, n),
np.repeat(True, n),
np.repeat(False, n),
np.repeat(True, n))),
'nomeds': np.concatenate((np.repeat(False, n),
np.repeat(False, n),
np.repeat(True, n),
np.repeat(True, n)))})
df.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
###Output
_____no_output_____
###Markdown
As in the Poisson regression example, we see that drinking alcohol and/or not taking antihistamines increase the sneezing rate to varying degrees. Unlike in that example, for each combination of `alcohol` and `nomeds`, the variance of `nsneeze` is higher than the mean. This suggests that a Poisson distribution would be a poor fit for the data since the mean and variance of a Poisson distribution are equal. Visualize the Data
###Code
g = sns.factorplot(x='nsneeze', row='nomeds', col='alcohol', data=df, kind='count', aspect=1.5)
# Make x-axis ticklabels less crowded
ax = g.axes[1, 0]
labels = range(len(ax.get_xticklabels(which='both')))
ax.set_xticks(labels[::5])
ax.set_xticklabels(labels[::5]);
###Output
_____no_output_____
###Markdown
Negative Binomial Regression Create GLM Model
###Code
fml = 'nsneeze ~ alcohol + nomeds + alcohol:nomeds'
with pm.Model() as model:
pm.glm.glm(formula=fml, data=df, family=pm.glm.families.NegativeBinomial())
# This initialization seems to improve mixing
start = pm.find_MAP(fmin=optimize.fmin_powell)
C = pm.approx_hessian(start)
trace = pm.sample(4000, step=pm.NUTS(scaling=C))
###Output
Optimization terminated successfully.
Current function value: 9825.951700
Iterations: 16
Function evaluations: 1086
###Markdown
View Results
###Code
rvs = [rv.name for rv in strip_derived_rvs(model.unobserved_RVs)]
plot_traces(trace[1000:], varnames=rvs);
# Transform coefficients to recover parameter values
np.exp(pm.df_summary(trace[1000:], varnames=rvs)[['mean','hpd_2.5','hpd_97.5']])
###Output
_____no_output_____
###Markdown
The mean values are close to the values we specified when generating the data:- The base rate is a constant 1.- Drinking alcohol triples the base rate.- Not taking antihistamines increases the base rate by 6 times.- Drinking alcohol and not taking antihistamines doubles the rate that would be expected if their rates were independent. If they were independent, then doing both would increase the base rate by 3\*6=18 times, but instead the base rate is increased by 3\*6\*2=16 times.Finally, even though the sample for `mu` is highly skewed, its median value is close to the sample mean, and the mean of `alpha` is also quite close to its actual value of 10.
###Code
np.percentile(trace[1000:]['mu'], [25,50,75])
df.nsneeze.mean()
trace[1000:]['alpha'].mean()
###Output
_____no_output_____
###Markdown
GLM: Negative Binomial RegressionThis notebook demos negative binomial regression using the `glm` submodule. It closely follows the GLM Poisson regression example by [Jonathan Sedar](https://github.com/jonsedar) (which is in turn inspired by [a project by Ian Osvald](http://ianozsvald.com/2016/05/07/statistically-solving-sneezes-and-sniffles-a-work-in-progress-report-at-pydatalondon-2016/)) except the data here is negative binomially distributed instead of Poisson distributed.Negative binomial regression is used to model count data for which the variance is higher than the mean. The [negative binomial distribution](https://en.wikipedia.org/wiki/Negative_binomial_distribution) can be thought of as a Poisson distribution whose rate parameter is gamma distributed, so that rate parameter can be adjusted to account for the increased variance. Contents+ [Setup](Setup) + [Convenience Functions](Convenience-Functions) + [Generate Data](Generate-Data) + [Poisson Data](Poisson-Data) + [Negative Binomial Data](Negative-Binomial-Data) + [Visualize the Data](Visualize-the-Data)+ [Negative Binomial Regression](Negative-Binomial-Regression) + [Create GLM Model](Create-GLM-Model) + [View Results](View-Results) Setup
###Code
import numpy as np
import pandas as pd
import pymc3 as pm
from scipy import stats
from scipy import optimize
import matplotlib.pyplot as plt
import seaborn as sns
import re
%matplotlib inline
###Output
_____no_output_____
###Markdown
Convenience FunctionsTaken from the Poisson regression example.
###Code
def plot_traces(trcs, varnames=None):
'''Plot traces with overlaid means and values'''
nrows = len(trcs.varnames)
if varnames is not None:
nrows = len(varnames)
ax = pm.traceplot(trcs, varnames=varnames, figsize=(12,nrows*1.4),
lines={k: v['mean'] for k, v in
pm.df_summary(trcs,varnames=varnames).iterrows()})
for i, mn in enumerate(pm.df_summary(trcs, varnames=varnames)['mean']):
ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data',
xytext=(5,10), textcoords='offset points', rotation=90,
va='bottom', fontsize='large', color='#AA0022')
def strip_derived_rvs(rvs):
'''Remove PyMC3-generated RVs from a list'''
ret_rvs = []
for rv in rvs:
if not (re.search('_log',rv.name) or re.search('_interval',rv.name)):
ret_rvs.append(rv)
return ret_rvs
###Output
_____no_output_____
###Markdown
Generate DataAs in the Poisson regression example, we assume that sneezing occurs at some baseline rate, and that consuming alcohol, not taking antihistamines, or doing both, increase its frequency. Poisson DataFirst, let's look at some Poisson distributed data from the Poisson regression example.
###Code
np.random.seed(123)
# Mean Poisson values
theta_noalcohol_meds = 1 # no alcohol, took an antihist
theta_alcohol_meds = 3 # alcohol, took an antihist
theta_noalcohol_nomeds = 6 # no alcohol, no antihist
theta_alcohol_nomeds = 36 # alcohol, no antihist
# Create samples
q = 1000
df_pois = pd.DataFrame({
'nsneeze': np.concatenate((np.random.poisson(theta_noalcohol_meds, q),
np.random.poisson(theta_alcohol_meds, q),
np.random.poisson(theta_noalcohol_nomeds, q),
np.random.poisson(theta_alcohol_nomeds, q))),
'alcohol': np.concatenate((np.repeat(False, q),
np.repeat(True, q),
np.repeat(False, q),
np.repeat(True, q))),
'nomeds': np.concatenate((np.repeat(False, q),
np.repeat(False, q),
np.repeat(True, q),
np.repeat(True, q)))})
df_pois.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
###Output
_____no_output_____
###Markdown
Since the mean and variance of a Poisson distributed random variable are equal, the sample means and variances are very close. Negative Binomial DataNow, suppose every subject in the dataset had the flu, increasing the variance of their sneezing (and causing an unfortunate few to sneeze over 70 times a day). If the mean number of sneezes stays the same but variance increases, the data might follow a negative binomial distribution.
###Code
# Gamma shape parameter
alpha = 10
def get_nb_vals(mu, alpha, size):
"""Generate negative binomially distributed samples by
drawing a sample from a gamma distribution with mean `mu` and
shape parameter `alpha', then drawing from a Poisson
distribution whose rate parameter is given by the sampled
gamma variable.
"""
g = stats.gamma.rvs(alpha, scale=mu / alpha, size=size)
return stats.poisson.rvs(g)
# Create samples
n = 1000
df = pd.DataFrame({
'nsneeze': np.concatenate((get_nb_vals(theta_noalcohol_meds, alpha, n),
get_nb_vals(theta_alcohol_meds, alpha, n),
get_nb_vals(theta_noalcohol_nomeds, alpha, n),
get_nb_vals(theta_alcohol_nomeds, alpha, n))),
'alcohol': np.concatenate((np.repeat(False, n),
np.repeat(True, n),
np.repeat(False, n),
np.repeat(True, n))),
'nomeds': np.concatenate((np.repeat(False, n),
np.repeat(False, n),
np.repeat(True, n),
np.repeat(True, n)))})
df.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
###Output
_____no_output_____
###Markdown
As in the Poisson regression example, we see that drinking alcohol and/or not taking antihistamines increase the sneezing rate to varying degrees. Unlike in that example, for each combination of `alcohol` and `nomeds`, the variance of `nsneeze` is higher than the mean. This suggests that a Poisson distribution would be a poor fit for the data since the mean and variance of a Poisson distribution are equal. Visualize the Data
###Code
g = sns.factorplot(x='nsneeze', row='nomeds', col='alcohol', data=df, kind='count', aspect=1.5)
# Make x-axis ticklabels less crowded
ax = g.axes[1, 0]
labels = range(len(ax.get_xticklabels(which='both')))
ax.set_xticks(labels[::5])
ax.set_xticklabels(labels[::5]);
###Output
_____no_output_____
###Markdown
Negative Binomial Regression Create GLM Model
###Code
fml = 'nsneeze ~ alcohol + nomeds + alcohol:nomeds'
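# Patsy-style formula: main effects for alcohol and nomeds plus their interaction.
# The coefficients act on the log of the expected sneeze count, which is why they
# are exponentiated under 'View Results' below.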
with pm.Model() as model:
pm.glm.GLM.from_formula(formula=fml, data=df, family=pm.glm.families.NegativeBinomial())
# Old initialization
# start = pm.find_MAP(fmin=optimize.fmin_powell)
# C = pm.approx_hessian(start)
# trace = pm.sample(4000, step=pm.NUTS(scaling=C))
trace = pm.sample(2000, njobs=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using ADVI...
Average Loss = 10,338: 7%|▋ | 13828/200000 [00:22<06:19, 490.54it/s]
Convergence archived at 13900
Interrupted at 13,900 [6%]: Average Loss = 14,046
100%|██████████| 2500/2500 [11:04<00:00, 2.93it/s]
###Markdown
View Results
###Code
rvs = [rv.name for rv in strip_derived_rvs(model.unobserved_RVs)]
plot_traces(trace[1000:], varnames=rvs);
# Transform coefficients to recover parameter values
np.exp(pm.df_summary(trace[1000:], varnames=rvs)[['mean','hpd_2.5','hpd_97.5']])
###Output
_____no_output_____
###Markdown
The mean values are close to the values we specified when generating the data:- The base rate is a constant 1.- Drinking alcohol triples the base rate.- Not taking antihistamines increases the base rate by 6 times.- Drinking alcohol and not taking antihistamines doubles the rate that would be expected if their rates were independent. If they were independent, then doing both would increase the base rate by 3\*6=18 times, but instead the base rate is increased by 3\*6\*2=36 times.Finally, even though the sample for `mu` is highly skewed, its median value is close to the sample mean, and the mean of `alpha` is also quite close to its actual value of 10.
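Returning to the coefficients for a moment: as a quick check of the rate arithmetic (a sketch based on the generating rates above, not the fitted values), the intercept should sit near $\log 1 = 0$, the `alcohol` and `nomeds` coefficients near $\log 3 \approx 1.10$ and $\log 6 \approx 1.79$, and the interaction near $\log 2 \approx 0.69$, since

$$\exp(\beta_0 + \beta_{\text{alcohol}} + \beta_{\text{nomeds}} + \beta_{\text{alcohol:nomeds}}) = 1 \times 3 \times 6 \times 2 = 36.$$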
###Code
np.percentile(trace[1000:]['mu'], [25,50,75])
df.nsneeze.mean()
trace[1000:]['alpha'].mean()
###Output
_____no_output_____
###Markdown
GLM: Negative Binomial Regression
###Code
%matplotlib inline
import numpy as np
import pandas as pd
import pymc3 as pm
from scipy import stats
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
import seaborn as sns
import re
print('Running on PyMC3 v{}'.format(pm.__version__))
###Output
Running on PyMC3 v3.4.1
###Markdown
This notebook demos negative binomial regression using the `glm` submodule. It closely follows the GLM Poisson regression example by [Jonathan Sedar](https://github.com/jonsedar) (which is in turn inspired by [a project by Ian Osvald](http://ianozsvald.com/2016/05/07/statistically-solving-sneezes-and-sniffles-a-work-in-progress-report-at-pydatalondon-2016/)) except the data here is negative binomially distributed instead of Poisson distributed.Negative binomial regression is used to model count data for which the variance is higher than the mean. The [negative binomial distribution](https://en.wikipedia.org/wiki/Negative_binomial_distribution) can be thought of as a Poisson distribution whose rate parameter is gamma distributed, so that rate parameter can be adjusted to account for the increased variance. Convenience FunctionsTaken from the Poisson regression example.
###Code
def plot_traces(trcs, varnames=None):
'''Plot traces with overlaid means and values'''
nrows = len(trcs.varnames)
if varnames is not None:
nrows = len(varnames)
ax = pm.traceplot(trcs, varnames=varnames, figsize=(12,nrows*1.4),
lines={k: v['mean'] for k, v in
pm.summary(trcs,varnames=varnames).iterrows()})
for i, mn in enumerate(pm.summary(trcs, varnames=varnames)['mean']):
ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data',
xytext=(5,10), textcoords='offset points', rotation=90,
va='bottom', fontsize='large', color='#AA0022')
def strip_derived_rvs(rvs):
'''Remove PyMC3-generated RVs from a list'''
ret_rvs = []
for rv in rvs:
if not (re.search('_log',rv.name) or re.search('_interval',rv.name)):
ret_rvs.append(rv)
return ret_rvs
###Output
_____no_output_____
###Markdown
Generate DataAs in the Poisson regression example, we assume that sneezing occurs at some baseline rate, and that consuming alcohol, not taking antihistamines, or doing both, increase its frequency. Poisson DataFirst, let's look at some Poisson distributed data from the Poisson regression example.
###Code
np.random.seed(123)
# Mean Poisson values
theta_noalcohol_meds = 1 # no alcohol, took an antihist
theta_alcohol_meds = 3 # alcohol, took an antihist
theta_noalcohol_nomeds = 6 # no alcohol, no antihist
theta_alcohol_nomeds = 36 # alcohol, no antihist
# Create samples
q = 1000
df_pois = pd.DataFrame({
'nsneeze': np.concatenate((np.random.poisson(theta_noalcohol_meds, q),
np.random.poisson(theta_alcohol_meds, q),
np.random.poisson(theta_noalcohol_nomeds, q),
np.random.poisson(theta_alcohol_nomeds, q))),
'alcohol': np.concatenate((np.repeat(False, q),
np.repeat(True, q),
np.repeat(False, q),
np.repeat(True, q))),
'nomeds': np.concatenate((np.repeat(False, q),
np.repeat(False, q),
np.repeat(True, q),
np.repeat(True, q)))})
df_pois.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
###Output
_____no_output_____
###Markdown
Since the mean and variance of a Poisson distributed random variable are equal, the sample means and variances are very close. Negative Binomial DataNow, suppose every subject in the dataset had the flu, increasing the variance of their sneezing (and causing an unfortunate few to sneeze over 70 times a day). If the mean number of sneezes stays the same but variance increases, the data might follow a negative binomial distribution.
###Code
# Gamma shape parameter
alpha = 10
def get_nb_vals(mu, alpha, size):
"""Generate negative binomially distributed samples by
drawing a sample from a gamma distribution with mean `mu` and
shape parameter `alpha', then drawing from a Poisson
distribution whose rate parameter is given by the sampled
gamma variable.
"""
g = stats.gamma.rvs(alpha, scale=mu / alpha, size=size)
return stats.poisson.rvs(g)
# Create samples
n = 1000
df = pd.DataFrame({
'nsneeze': np.concatenate((get_nb_vals(theta_noalcohol_meds, alpha, n),
get_nb_vals(theta_alcohol_meds, alpha, n),
get_nb_vals(theta_noalcohol_nomeds, alpha, n),
get_nb_vals(theta_alcohol_nomeds, alpha, n))),
'alcohol': np.concatenate((np.repeat(False, n),
np.repeat(True, n),
np.repeat(False, n),
np.repeat(True, n))),
'nomeds': np.concatenate((np.repeat(False, n),
np.repeat(False, n),
np.repeat(True, n),
np.repeat(True, n)))})
df.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
###Output
_____no_output_____
###Markdown
As in the Poisson regression example, we see that drinking alcohol and/or not taking antihistamines increase the sneezing rate to varying degrees. Unlike in that example, for each combination of `alcohol` and `nomeds`, the variance of `nsneeze` is higher than the mean. This suggests that a Poisson distribution would be a poor fit for the data since the mean and variance of a Poisson distribution are equal. Visualize the Data
###Code
g = sns.factorplot(x='nsneeze', row='nomeds', col='alcohol', data=df, kind='count', aspect=1.5)
# Make x-axis ticklabels less crowded
ax = g.axes[1, 0]
labels = range(len(ax.get_xticklabels(which='both')))
ax.set_xticks(labels[::5])
ax.set_xticklabels(labels[::5]);
###Output
_____no_output_____
###Markdown
Negative Binomial Regression Create GLM Model
###Code
fml = 'nsneeze ~ alcohol + nomeds + alcohol:nomeds'
with pm.Model() as model:
pm.glm.GLM.from_formula(formula=fml, data=df, family=pm.glm.families.NegativeBinomial())
# Old initialization
# start = pm.find_MAP(fmin=optimize.fmin_powell)
# C = pm.approx_hessian(start)
# trace = pm.sample(4000, step=pm.NUTS(scaling=C))
trace = pm.sample(1000, tune=2000, cores=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [alpha, mu, alcohol[T.True]:nomeds[T.True], nomeds[T.True], alcohol[T.True], Intercept]
Sampling 2 chains: 100%|██████████| 6000/6000 [03:15<00:00, 30.72draws/s]
###Markdown
View Results
###Code
rvs = [rv.name for rv in strip_derived_rvs(model.unobserved_RVs)]
plot_traces(trace, varnames=rvs);
# Transform coefficients to recover parameter values
np.exp(pm.summary(trace, varnames=rvs)[['mean','hpd_2.5','hpd_97.5']])
###Output
_____no_output_____
###Markdown
The mean values are close to the values we specified when generating the data:- The base rate is a constant 1.- Drinking alcohol triples the base rate.- Not taking antihistamines increases the base rate by 6 times.- Drinking alcohol and not taking antihistamines doubles the rate that would be expected if their rates were independent. If they were independent, then doing both would increase the base rate by 3\*6=18 times, but instead the base rate is increased by 3\*6\*2=36 times.Finally, even though the sample for `mu` is highly skewed, its median value is close to the sample mean, and the mean of `alpha` is also quite close to its actual value of 10.
###Code
np.percentile(trace['mu'], [25,50,75])
df.nsneeze.mean()
trace['alpha'].mean()
###Output
_____no_output_____
###Markdown
GLM: Negative Binomial Regression
###Code
import re
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import seaborn as sns
from scipy import stats
print(f'Running on PyMC3 v{pm.__version__}')
%config InlineBackend.figure_format = 'retina'
az.style.use('arviz-darkgrid')
###Output
_____no_output_____
###Markdown
This notebook demos negative binomial regression using the `glm` submodule. It closely follows the GLM Poisson regression example by [Jonathan Sedar](https://github.com/jonsedar) (which is in turn inspired by [a project by Ian Osvald](http://ianozsvald.com/2016/05/07/statistically-solving-sneezes-and-sniffles-a-work-in-progress-report-at-pydatalondon-2016/)) except the data here is negative binomially distributed instead of Poisson distributed.Negative binomial regression is used to model count data for which the variance is higher than the mean. The [negative binomial distribution](https://en.wikipedia.org/wiki/Negative_binomial_distribution) can be thought of as a Poisson distribution whose rate parameter is gamma distributed, so that rate parameter can be adjusted to account for the increased variance. Convenience FunctionsTaken from the Poisson regression example.
###Code
def plot_traces(trcs, varnames=None):
'''Plot traces with overlaid means and values'''
nrows = len(trcs.varnames)
if varnames is not None:
nrows = len(varnames)
ax = pm.traceplot(trcs, var_names=varnames, figsize=(12, nrows*1.4),
lines=tuple([(k, {}, v['mean'])
for k, v in pm.summary(trcs, varnames=varnames).iterrows()]))
for i, mn in enumerate(pm.summary(trcs, varnames=varnames)['mean']):
ax[i, 0].annotate(f'{mn:.2f}', xy=(mn, 0), xycoords='data',
xytext=(5, 10), textcoords='offset points', rotation=90,
va='bottom', fontsize='large', color='#AA0022')
def strip_derived_rvs(rvs):
'''Remove PyMC3-generated RVs from a list'''
ret_rvs = []
for rv in rvs:
if not (re.search('_log', rv.name) or re.search('_interval', rv.name)):
ret_rvs.append(rv)
return ret_rvs
###Output
_____no_output_____
###Markdown
Generate DataAs in the Poisson regression example, we assume that sneezing occurs at some baseline rate, and that consuming alcohol, not taking antihistamines, or doing both, increase its frequency. Poisson DataFirst, let's look at some Poisson distributed data from the Poisson regression example.
###Code
np.random.seed(123)
# Mean Poisson values
theta_noalcohol_meds = 1 # no alcohol, took an antihist
theta_alcohol_meds = 3 # alcohol, took an antihist
theta_noalcohol_nomeds = 6 # no alcohol, no antihist
theta_alcohol_nomeds = 36 # alcohol, no antihist
# Create samples
q = 1000
df_pois = pd.DataFrame({
'nsneeze': np.concatenate((np.random.poisson(theta_noalcohol_meds, q),
np.random.poisson(theta_alcohol_meds, q),
np.random.poisson(theta_noalcohol_nomeds, q),
np.random.poisson(theta_alcohol_nomeds, q))),
'alcohol': np.concatenate((np.repeat(False, q),
np.repeat(True, q),
np.repeat(False, q),
np.repeat(True, q))),
'nomeds': np.concatenate((np.repeat(False, q),
np.repeat(False, q),
np.repeat(True, q),
np.repeat(True, q)))})
df_pois.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
###Output
_____no_output_____
###Markdown
Since the mean and variance of a Poisson distributed random variable are equal, the sample means and variances are very close. Negative Binomial DataNow, suppose every subject in the dataset had the flu, increasing the variance of their sneezing (and causing an unfortunate few to sneeze over 70 times a day). If the mean number of sneezes stays the same but variance increases, the data might follow a negative binomial distribution.
###Code
# Gamma shape parameter
alpha = 10
def get_nb_vals(mu, alpha, size):
"""Generate negative binomially distributed samples by
drawing a sample from a gamma distribution with mean `mu` and
shape parameter `alpha', then drawing from a Poisson
distribution whose rate parameter is given by the sampled
gamma variable.
"""
g = stats.gamma.rvs(alpha, scale=mu / alpha, size=size)
return stats.poisson.rvs(g)
# Create samples
n = 1000
df = pd.DataFrame({
'nsneeze': np.concatenate((get_nb_vals(theta_noalcohol_meds, alpha, n),
get_nb_vals(theta_alcohol_meds, alpha, n),
get_nb_vals(theta_noalcohol_nomeds, alpha, n),
get_nb_vals(theta_alcohol_nomeds, alpha, n))),
'alcohol': np.concatenate((np.repeat(False, n),
np.repeat(True, n),
np.repeat(False, n),
np.repeat(True, n))),
'nomeds': np.concatenate((np.repeat(False, n),
np.repeat(False, n),
np.repeat(True, n),
np.repeat(True, n)))})
df.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
###Output
_____no_output_____
###Markdown
As in the Poisson regression example, we see that drinking alcohol and/or not taking antihistamines increase the sneezing rate to varying degrees. Unlike in that example, for each combination of `alcohol` and `nomeds`, the variance of `nsneeze` is higher than the mean. This suggests that a Poisson distribution would be a poor fit for the data since the mean and variance of a Poisson distribution are equal. Visualize the Data
###Code
g = sns.catplot(x='nsneeze', row='nomeds', col='alcohol', data=df, kind='count', aspect=1.5)
# Make x-axis ticklabels less crowded
ax = g.axes[1, 0]
labels = range(len(ax.get_xticklabels(which='both')))
ax.set_xticks(labels[::5])
ax.set_xticklabels(labels[::5]);
###Output
_____no_output_____
###Markdown
Negative Binomial Regression Create GLM Model
###Code
fml = 'nsneeze ~ alcohol + nomeds + alcohol:nomeds'
with pm.Model() as model:
pm.glm.GLM.from_formula(formula=fml, data=df, family=pm.glm.families.NegativeBinomial())
trace = pm.sample(1000, tune=1000, cores=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [alpha, mu, alcohol[T.True]:nomeds[T.True], nomeds[T.True], alcohol[T.True], Intercept]
Sampling 2 chains: 100%|██████████| 4000/4000 [01:08<00:00, 58.46draws/s]
The number of effective samples is smaller than 25% for some parameters.
###Markdown
View Results
###Code
varnames = [rv.name for rv in strip_derived_rvs(model.unobserved_RVs)]
plot_traces(trace, varnames=varnames);
# Transform coefficients to recover parameter values
np.exp(pm.summary(trace, varnames=varnames)[['mean','hpd_2.5','hpd_97.5']])
###Output
_____no_output_____
###Markdown
The mean values are close to the values we specified when generating the data:- The base rate is a constant 1.- Drinking alcohol triples the base rate.- Not taking antihistamines increases the base rate by 6 times.- Drinking alcohol and not taking antihistamines doubles the rate that would be expected if their rates were independent. If they were independent, then doing both would increase the base rate by 3\*6=18 times, but instead the base rate is increased by 3\*6\*2=36 times.Finally, even though the sample for `mu` is highly skewed, its median value is close to the sample mean, and the mean of `alpha` is also quite close to its actual value of 10.
###Code
np.percentile(trace['mu'], [25,50,75])
df.nsneeze.mean()
trace['alpha'].mean()
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
pymc3 3.8
arviz 0.8.3
numpy 1.17.5
last updated: Thu Jun 11 2020
CPython 3.8.2
IPython 7.11.0
watermark 2.0.2
###Markdown
GLM: Negative Binomial Regression
###Code
%matplotlib inline
import numpy as np
import pandas as pd
import pymc3 as pm
from scipy import stats
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
import seaborn as sns
import re
print('Running on PyMC3 v{}'.format(pm.__version__))
###Output
Running on PyMC3 v3.6
###Markdown
This notebook demos negative binomial regression using the `glm` submodule. It closely follows the GLM Poisson regression example by [Jonathan Sedar](https://github.com/jonsedar) (which is in turn inspired by [a project by Ian Osvald](http://ianozsvald.com/2016/05/07/statistically-solving-sneezes-and-sniffles-a-work-in-progress-report-at-pydatalondon-2016/)) except the data here is negative binomially distributed instead of Poisson distributed.Negative binomial regression is used to model count data for which the variance is higher than the mean. The [negative binomial distribution](https://en.wikipedia.org/wiki/Negative_binomial_distribution) can be thought of as a Poisson distribution whose rate parameter is gamma distributed, so that rate parameter can be adjusted to account for the increased variance. Convenience FunctionsTaken from the Poisson regression example.
###Code
def plot_traces(trcs, varnames=None):
'''Plot traces with overlaid means and values'''
nrows = len(trcs.varnames)
if varnames is not None:
nrows = len(varnames)
ax = pm.traceplot(trcs, var_names=varnames, figsize=(12, nrows*1.4),
lines=tuple([(k, {}, v['mean'])
for k, v in pm.summary(trcs, varnames=varnames).iterrows()]))
for i, mn in enumerate(pm.summary(trcs, varnames=varnames)['mean']):
ax[i, 0].annotate('{:.2f}'.format(mn), xy=(mn, 0), xycoords='data',
xytext=(5, 10), textcoords='offset points', rotation=90,
va='bottom', fontsize='large', color='#AA0022')
def strip_derived_rvs(rvs):
'''Remove PyMC3-generated RVs from a list'''
ret_rvs = []
for rv in rvs:
if not (re.search('_log', rv.name) or re.search('_interval', rv.name)):
ret_rvs.append(rv)
return ret_rvs
###Output
_____no_output_____
###Markdown
Generate DataAs in the Poisson regression example, we assume that sneezing occurs at some baseline rate, and that consuming alcohol, not taking antihistamines, or doing both, increase its frequency. Poisson DataFirst, let's look at some Poisson distributed data from the Poisson regression example.
###Code
np.random.seed(123)
# Mean Poisson values
theta_noalcohol_meds = 1 # no alcohol, took an antihist
theta_alcohol_meds = 3 # alcohol, took an antihist
theta_noalcohol_nomeds = 6 # no alcohol, no antihist
theta_alcohol_nomeds = 36 # alcohol, no antihist
# Create samples
q = 1000
df_pois = pd.DataFrame({
'nsneeze': np.concatenate((np.random.poisson(theta_noalcohol_meds, q),
np.random.poisson(theta_alcohol_meds, q),
np.random.poisson(theta_noalcohol_nomeds, q),
np.random.poisson(theta_alcohol_nomeds, q))),
'alcohol': np.concatenate((np.repeat(False, q),
np.repeat(True, q),
np.repeat(False, q),
np.repeat(True, q))),
'nomeds': np.concatenate((np.repeat(False, q),
np.repeat(False, q),
np.repeat(True, q),
np.repeat(True, q)))})
df_pois.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
###Output
_____no_output_____
###Markdown
Since the mean and variance of a Poisson distributed random variable are equal, the sample means and variances are very close. Negative Binomial DataNow, suppose every subject in the dataset had the flu, increasing the variance of their sneezing (and causing an unfortunate few to sneeze over 70 times a day). If the mean number of sneezes stays the same but variance increases, the data might follow a negative binomial distribution.
###Code
# Gamma shape parameter
alpha = 10
def get_nb_vals(mu, alpha, size):
"""Generate negative binomially distributed samples by
drawing a sample from a gamma distribution with mean `mu` and
shape parameter `alpha', then drawing from a Poisson
distribution whose rate parameter is given by the sampled
gamma variable.
"""
g = stats.gamma.rvs(alpha, scale=mu / alpha, size=size)
return stats.poisson.rvs(g)
# Create samples
n = 1000
df = pd.DataFrame({
'nsneeze': np.concatenate((get_nb_vals(theta_noalcohol_meds, alpha, n),
get_nb_vals(theta_alcohol_meds, alpha, n),
get_nb_vals(theta_noalcohol_nomeds, alpha, n),
get_nb_vals(theta_alcohol_nomeds, alpha, n))),
'alcohol': np.concatenate((np.repeat(False, n),
np.repeat(True, n),
np.repeat(False, n),
np.repeat(True, n))),
'nomeds': np.concatenate((np.repeat(False, n),
np.repeat(False, n),
np.repeat(True, n),
np.repeat(True, n)))})
df.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
###Output
_____no_output_____
###Markdown
As in the Poisson regression example, we see that drinking alcohol and/or not taking antihistamines increase the sneezing rate to varying degrees. Unlike in that example, for each combination of `alcohol` and `nomeds`, the variance of `nsneeze` is higher than the mean. This suggests that a Poisson distribution would be a poor fit for the data since the mean and variance of a Poisson distribution are equal. Visualize the Data
###Code
g = sns.catplot(x='nsneeze', row='nomeds', col='alcohol', data=df, kind='count', aspect=1.5)
# Make x-axis ticklabels less crowded
ax = g.axes[1, 0]
labels = range(len(ax.get_xticklabels(which='both')))
ax.set_xticks(labels[::5])
ax.set_xticklabels(labels[::5]);
###Output
_____no_output_____
###Markdown
Negative Binomial Regression Create GLM Model
###Code
fml = 'nsneeze ~ alcohol + nomeds + alcohol:nomeds'
with pm.Model() as model:
pm.glm.GLM.from_formula(formula=fml, data=df, family=pm.glm.families.NegativeBinomial())
trace = pm.sample(1000, tune=1000, cores=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [alpha, mu, alcohol[T.True]:nomeds[T.True], nomeds[T.True], alcohol[T.True], Intercept]
Sampling 2 chains: 100%|██████████| 4000/4000 [01:08<00:00, 58.46draws/s]
The number of effective samples is smaller than 25% for some parameters.
###Markdown
View Results
###Code
varnames = [rv.name for rv in strip_derived_rvs(model.unobserved_RVs)]
plot_traces(trace, varnames=varnames);
# Transform coefficients to recover parameter values
np.exp(pm.summary(trace, varnames=varnames)[['mean','hpd_2.5','hpd_97.5']])
###Output
_____no_output_____
###Markdown
The mean values are close to the values we specified when generating the data:- The base rate is a constant 1.- Drinking alcohol triples the base rate.- Not taking antihistamines increases the base rate by 6 times.- Drinking alcohol and not taking antihistamines doubles the rate that would be expected if their rates were independent. If they were independent, then doing both would increase the base rate by 3\*6=18 times, but instead the base rate is increased by 3\*6\*2=36 times.Finally, even though the sample for `mu` is highly skewed, its median value is close to the sample mean, and the mean of `alpha` is also quite close to its actual value of 10.
###Code
np.percentile(trace['mu'], [25,50,75])
df.nsneeze.mean()
trace['alpha'].mean()
###Output
_____no_output_____
###Markdown
GLM: Negative Binomial Regression
###Code
%matplotlib inline
import numpy as np
import pandas as pd
import pymc3 as pm
from scipy import stats
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
import seaborn as sns
import re
print('Running on PyMC3 v{}'.format(pm.__version__))
###Output
/home/osvaldo/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
###Markdown
This notebook demos negative binomial regression using the `glm` submodule. It closely follows the GLM Poisson regression example by [Jonathan Sedar](https://github.com/jonsedar) (which is in turn inspired by [a project by Ian Osvald](http://ianozsvald.com/2016/05/07/statistically-solving-sneezes-and-sniffles-a-work-in-progress-report-at-pydatalondon-2016/)) except the data here is negative binomially distributed instead of Poisson distributed.Negative binomial regression is used to model count data for which the variance is higher than the mean. The [negative binomial distribution](https://en.wikipedia.org/wiki/Negative_binomial_distribution) can be thought of as a Poisson distribution whose rate parameter is gamma distributed, so that rate parameter can be adjusted to account for the increased variance. Convenience FunctionsTaken from the Poisson regression example.
###Code
def plot_traces(trcs, varnames=None):
'''Plot traces with overlaid means and values'''
nrows = len(trcs.varnames)
if varnames is not None:
nrows = len(varnames)
ax = pm.traceplot(trcs, varnames=varnames, figsize=(12,nrows*1.4),
lines={k: v['mean'] for k, v in
pm.summary(trcs,varnames=varnames).iterrows()})
for i, mn in enumerate(pm.summary(trcs, varnames=varnames)['mean']):
ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data',
xytext=(5,10), textcoords='offset points', rotation=90,
va='bottom', fontsize='large', color='#AA0022')
def strip_derived_rvs(rvs):
'''Remove PyMC3-generated RVs from a list'''
ret_rvs = []
for rv in rvs:
if not (re.search('_log',rv.name) or re.search('_interval',rv.name)):
ret_rvs.append(rv)
return ret_rvs
###Output
_____no_output_____
###Markdown
Generate DataAs in the Poisson regression example, we assume that sneezing occurs at some baseline rate, and that consuming alcohol, not taking antihistamines, or doing both, increase its frequency. Poisson DataFirst, let's look at some Poisson distributed data from the Poisson regression example.
###Code
np.random.seed(123)
# Mean Poisson values
theta_noalcohol_meds = 1 # no alcohol, took an antihist
theta_alcohol_meds = 3 # alcohol, took an antihist
theta_noalcohol_nomeds = 6 # no alcohol, no antihist
theta_alcohol_nomeds = 36 # alcohol, no antihist
# Create samples
q = 1000
df_pois = pd.DataFrame({
'nsneeze': np.concatenate((np.random.poisson(theta_noalcohol_meds, q),
np.random.poisson(theta_alcohol_meds, q),
np.random.poisson(theta_noalcohol_nomeds, q),
np.random.poisson(theta_alcohol_nomeds, q))),
'alcohol': np.concatenate((np.repeat(False, q),
np.repeat(True, q),
np.repeat(False, q),
np.repeat(True, q))),
'nomeds': np.concatenate((np.repeat(False, q),
np.repeat(False, q),
np.repeat(True, q),
np.repeat(True, q)))})
df_pois.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
###Output
_____no_output_____
###Markdown
Since the mean and variance of a Poisson distributed random variable are equal, the sample means and variances are very close. Negative Binomial DataNow, suppose every subject in the dataset had the flu, increasing the variance of their sneezing (and causing an unfortunate few to sneeze over 70 times a day). If the mean number of sneezes stays the same but variance increases, the data might follow a negative binomial distribution.
###Code
# Gamma shape parameter
alpha = 10
def get_nb_vals(mu, alpha, size):
"""Generate negative binomially distributed samples by
drawing a sample from a gamma distribution with mean `mu` and
shape parameter `alpha', then drawing from a Poisson
distribution whose rate parameter is given by the sampled
gamma variable.
"""
g = stats.gamma.rvs(alpha, scale=mu / alpha, size=size)
return stats.poisson.rvs(g)
# Create samples
n = 1000
df = pd.DataFrame({
'nsneeze': np.concatenate((get_nb_vals(theta_noalcohol_meds, alpha, n),
get_nb_vals(theta_alcohol_meds, alpha, n),
get_nb_vals(theta_noalcohol_nomeds, alpha, n),
get_nb_vals(theta_alcohol_nomeds, alpha, n))),
'alcohol': np.concatenate((np.repeat(False, n),
np.repeat(True, n),
np.repeat(False, n),
np.repeat(True, n))),
'nomeds': np.concatenate((np.repeat(False, n),
np.repeat(False, n),
np.repeat(True, n),
np.repeat(True, n)))})
df.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
###Output
_____no_output_____
###Markdown
As in the Poisson regression example, we see that drinking alcohol and/or not taking antihistamines increase the sneezing rate to varying degrees. Unlike in that example, for each combination of `alcohol` and `nomeds`, the variance of `nsneeze` is higher than the mean. This suggests that a Poisson distribution would be a poor fit for the data since the mean and variance of a Poisson distribution are equal. Visualize the Data
###Code
g = sns.factorplot(x='nsneeze', row='nomeds', col='alcohol', data=df, kind='count', aspect=1.5)
# Make x-axis ticklabels less crowded
ax = g.axes[1, 0]
labels = range(len(ax.get_xticklabels(which='both')))
ax.set_xticks(labels[::5])
ax.set_xticklabels(labels[::5]);
###Output
_____no_output_____
###Markdown
Negative Binomial Regression Create GLM Model
###Code
fml = 'nsneeze ~ alcohol + nomeds + alcohol:nomeds'
with pm.Model() as model:
pm.glm.GLM.from_formula(formula=fml, data=df, family=pm.glm.families.NegativeBinomial())
# Old initialization
# start = pm.find_MAP(fmin=optimize.fmin_powell)
# C = pm.approx_hessian(start)
# trace = pm.sample(4000, step=pm.NUTS(scaling=C))
trace = pm.sample(2000, cores=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using ADVI...
Average Loss = 10,338: 7%|▋ | 13828/200000 [00:22<06:19, 490.54it/s]
Convergence archived at 13900
Interrupted at 13,900 [6%]: Average Loss = 14,046
100%|██████████| 2500/2500 [11:04<00:00, 2.93it/s]
###Markdown
View Results
###Code
rvs = [rv.name for rv in strip_derived_rvs(model.unobserved_RVs)]
plot_traces(trace[1000:], varnames=rvs);
# Transform coefficients to recover parameter values
np.exp(pm.summary(trace[1000:], varnames=rvs)[['mean','hpd_2.5','hpd_97.5']])
###Output
_____no_output_____
###Markdown
The mean values are close to the values we specified when generating the data:- The base rate is a constant 1.- Drinking alcohol triples the base rate.- Not taking antihistamines increases the base rate by 6 times.- Drinking alcohol and not taking antihistamines doubles the rate that would be expected if their rates were independent. If they were independent, then doing both would increase the base rate by 3\*6=18 times, but instead the base rate is increased by 3\*6\*2=36 times.Finally, even though the sample for `mu` is highly skewed, its median value is close to the sample mean, and the mean of `alpha` is also quite close to its actual value of 10.
###Code
np.percentile(trace[1000:]['mu'], [25,50,75])
df.nsneeze.mean()
trace[1000:]['alpha'].mean()
###Output
_____no_output_____
###Markdown
GLM: Negative Binomial RegressionThis notebook demos negative binomial regression using the `glm` submodule. It closely follows the GLM Poisson regression example by [Jonathan Sedar](https://github.com/jonsedar) (which is in turn inspired by [a project by Ian Osvald](http://ianozsvald.com/2016/05/07/statistically-solving-sneezes-and-sniffles-a-work-in-progress-report-at-pydatalondon-2016/)) except the data here is negative binomially distributed instead of Poisson distributed.Negative binomial regression is used to model count data for which the variance is higher than the mean. The [negative binomial distribution](https://en.wikipedia.org/wiki/Negative_binomial_distribution) can be thought of as a Poisson distribution whose rate parameter is gamma distributed, so that rate parameter can be adjusted to account for the increased variance. Contents+ [Setup](Setup) + [Convenience Functions](Convenience-Functions) + [Generate Data](Generate-Data) + [Poisson Data](Poisson-Data) + [Negative Binomial Data](Negative-Binomial-Data) + [Visualize the Data](Visualize-the-Data)+ [Negative Binomial Regression](Negative-Binomial-Regression) + [Create GLM Model](Create-GLM-Model) + [View Results](View-Results) Setup
###Code
import numpy as np
import pandas as pd
import pymc3 as pm
from scipy import stats
from scipy import optimize
import matplotlib.pyplot as plt
import seaborn as sns
import re
%matplotlib inline
###Output
_____no_output_____
###Markdown
Convenience Functions (Taken from the Poisson regression example)
###Code
def plot_traces(trcs, varnames=None):
'''Plot traces with overlaid means and values'''
nrows = len(trcs.varnames)
if varnames is not None:
nrows = len(varnames)
ax = pm.traceplot(trcs, varnames=varnames, figsize=(12,nrows*1.4),
lines={k: v['mean'] for k, v in
pm.df_summary(trcs,varnames=varnames).iterrows()})
for i, mn in enumerate(pm.df_summary(trcs, varnames=varnames)['mean']):
ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data',
xytext=(5,10), textcoords='offset points', rotation=90,
va='bottom', fontsize='large', color='#AA0022')
def strip_derived_rvs(rvs):
'''Remove PyMC3-generated RVs from a list'''
ret_rvs = []
for rv in rvs:
if not (re.search('_log',rv.name) or re.search('_interval',rv.name)):
ret_rvs.append(rv)
return ret_rvs
###Output
_____no_output_____
###Markdown
Generate DataAs in the Poisson regression example, we assume that sneezing occurs at some baseline rate, and that consuming alcohol, not taking antihistamines, or doing both, increase its frequency. Poisson DataFirst, let's look at some Poisson distributed data from the Poisson regression example.
###Code
np.random.seed(123)
# Mean Poisson values
theta_noalcohol_meds = 1 # no alcohol, took an antihist
theta_alcohol_meds = 3 # alcohol, took an antihist
theta_noalcohol_nomeds = 6 # no alcohol, no antihist
theta_alcohol_nomeds = 36 # alcohol, no antihist
# Create samples
q = 1000
df_pois = pd.DataFrame({
'nsneeze': np.concatenate((np.random.poisson(theta_noalcohol_meds, q),
np.random.poisson(theta_alcohol_meds, q),
np.random.poisson(theta_noalcohol_nomeds, q),
np.random.poisson(theta_alcohol_nomeds, q))),
'alcohol': np.concatenate((np.repeat(False, q),
np.repeat(True, q),
np.repeat(False, q),
np.repeat(True, q))),
'nomeds': np.concatenate((np.repeat(False, q),
np.repeat(False, q),
np.repeat(True, q),
np.repeat(True, q)))})
df_pois.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
###Output
_____no_output_____
###Markdown
Since the mean and variance of a Poisson distributed random variable are equal, the sample means and variances are very close. Negative Binomial DataNow, suppose every subject in the dataset had the flu, increasing the variance of their sneezing (and causing an unfortunate few to sneeze over 70 times a day). If the mean number of sneezes stays the same but variance increases, the data might follow a negative binomial distribution.
###Code
# Gamma shape parameter
alpha = 10
def get_nb_vals(mu, alpha, size):
"""Generate negative binomially distributed samples by
drawing a sample from a gamma distribution with mean `mu` and
shape parameter `alpha', then drawing from a Poisson
distribution whose rate parameter is given by the sampled
gamma variable.
"""
g = stats.gamma.rvs(alpha, scale=mu / alpha, size=size)
return stats.poisson.rvs(g)
# Create samples
n = 1000
df = pd.DataFrame({
'nsneeze': np.concatenate((get_nb_vals(theta_noalcohol_meds, alpha, n),
get_nb_vals(theta_alcohol_meds, alpha, n),
get_nb_vals(theta_noalcohol_nomeds, alpha, n),
get_nb_vals(theta_alcohol_nomeds, alpha, n))),
'alcohol': np.concatenate((np.repeat(False, n),
np.repeat(True, n),
np.repeat(False, n),
np.repeat(True, n))),
'nomeds': np.concatenate((np.repeat(False, n),
np.repeat(False, n),
np.repeat(True, n),
np.repeat(True, n)))})
df.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
###Output
_____no_output_____
###Markdown
As in the Poisson regression example, we see that drinking alcohol and/or not taking antihistamines increase the sneezing rate to varying degrees. Unlike in that example, for each combination of `alcohol` and `nomeds`, the variance of `nsneeze` is higher than the mean. This suggests that a Poisson distribution would be a poor fit for the data since the mean and variance of a Poisson distribution are equal. Visualize the Data
###Code
g = sns.factorplot(x='nsneeze', row='nomeds', col='alcohol', data=df, kind='count', aspect=1.5)
# Make x-axis ticklabels less crowded
ax = g.axes[1, 0]
labels = range(len(ax.get_xticklabels(which='both')))
ax.set_xticks(labels[::5])
ax.set_xticklabels(labels[::5]);
###Output
_____no_output_____
###Markdown
Negative Binomial Regression Create GLM Model
###Code
fml = 'nsneeze ~ alcohol + nomeds + alcohol:nomeds'
with pm.Model() as model:
pm.glm.glm(formula=fml, data=df, family=pm.glm.families.NegativeBinomial())
# This initialization seems to improve mixing
start = pm.find_MAP(fmin=optimize.fmin_powell)
C = pm.approx_hessian(start)
trace = pm.sample(4000, step=pm.NUTS(scaling=C))
###Output
Optimization terminated successfully.
Current function value: 9825.951700
Iterations: 16
Function evaluations: 1086
###Markdown
View Results
###Code
rvs = [rv.name for rv in strip_derived_rvs(model.unobserved_RVs)]
plot_traces(trace[1000:], varnames=rvs);
# Transform coefficients to recover parameter values
np.exp(pm.df_summary(trace[1000:], varnames=rvs)[['mean','hpd_2.5','hpd_97.5']])
###Output
_____no_output_____
###Markdown
The mean values are close to the values we specified when generating the data:- The base rate is a constant 1.- Drinking alcohol triples the base rate.- Not taking antihistamines increases the base rate by 6 times.- Drinking alcohol and not taking antihistamines doubles the rate that would be expected if their rates were independent. If they were independent, then doing both would increase the base rate by 3\*6=18 times, but instead the base rate is increased by 3\*6\*2=36 times.Finally, even though the sample for `mu` is highly skewed, its median value is close to the sample mean, and the mean of `alpha` is also quite close to its actual value of 10.
###Code
np.percentile(trace[1000:]['mu'], [25,50,75])
df.nsneeze.mean()
trace[1000:]['alpha'].mean()
###Output
_____no_output_____
###Markdown
GLM: Negative Binomial Regression
###Code
import re
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import seaborn as sns
from scipy import stats
print(f"Running on PyMC3 v{pm.__version__}")
RANDOM_SEED = 8927
np.random.seed(RANDOM_SEED)
az.style.use("arviz-darkgrid")
###Output
_____no_output_____
###Markdown
This notebook demos negative binomial regression using the `glm` submodule. It closely follows the GLM Poisson regression example by [Jonathan Sedar](https://github.com/jonsedar) (which is in turn inspired by [a project by Ian Osvald](http://ianozsvald.com/2016/05/07/statistically-solving-sneezes-and-sniffles-a-work-in-progress-report-at-pydatalondon-2016/)) except the data here is negative binomially distributed instead of Poisson distributed.Negative binomial regression is used to model count data for which the variance is higher than the mean. The [negative binomial distribution](https://en.wikipedia.org/wiki/Negative_binomial_distribution) can be thought of as a Poisson distribution whose rate parameter is gamma distributed, so that rate parameter can be adjusted to account for the increased variance. Generate DataAs in the Poisson regression example, we assume that sneezing occurs at some baseline rate, and that consuming alcohol, not taking antihistamines, or doing both, increase its frequency. Poisson DataFirst, let's look at some Poisson distributed data from the Poisson regression example.
###Code
# Mean Poisson values
theta_noalcohol_meds = 1 # no alcohol, took an antihist
theta_alcohol_meds = 3 # alcohol, took an antihist
theta_noalcohol_nomeds = 6 # no alcohol, no antihist
theta_alcohol_nomeds = 36 # alcohol, no antihist
# Create samples
q = 1000
df_pois = pd.DataFrame(
{
"nsneeze": np.concatenate(
(
np.random.poisson(theta_noalcohol_meds, q),
np.random.poisson(theta_alcohol_meds, q),
np.random.poisson(theta_noalcohol_nomeds, q),
np.random.poisson(theta_alcohol_nomeds, q),
)
),
"alcohol": np.concatenate(
(
np.repeat(False, q),
np.repeat(True, q),
np.repeat(False, q),
np.repeat(True, q),
)
),
"nomeds": np.concatenate(
(
np.repeat(False, q),
np.repeat(False, q),
np.repeat(True, q),
np.repeat(True, q),
)
),
}
)
df_pois.groupby(["nomeds", "alcohol"])["nsneeze"].agg(["mean", "var"])
###Output
_____no_output_____
###Markdown
Since the mean and variance of a Poisson distributed random variable are equal, the sample means and variances are very close. Negative Binomial DataNow, suppose every subject in the dataset had the flu, increasing the variance of their sneezing (and causing an unfortunate few to sneeze over 70 times a day). If the mean number of sneezes stays the same but variance increases, the data might follow a negative binomial distribution.
###Code
# Gamma shape parameter
alpha = 10
def get_nb_vals(mu, alpha, size):
"""Generate negative binomially distributed samples by
drawing a sample from a gamma distribution with mean `mu` and
shape parameter `alpha', then drawing from a Poisson
distribution whose rate parameter is given by the sampled
gamma variable.
"""
g = stats.gamma.rvs(alpha, scale=mu / alpha, size=size)
return stats.poisson.rvs(g)
# Create samples
n = 1000
df = pd.DataFrame(
{
"nsneeze": np.concatenate(
(
get_nb_vals(theta_noalcohol_meds, alpha, n),
get_nb_vals(theta_alcohol_meds, alpha, n),
get_nb_vals(theta_noalcohol_nomeds, alpha, n),
get_nb_vals(theta_alcohol_nomeds, alpha, n),
)
),
"alcohol": np.concatenate(
(
np.repeat(False, n),
np.repeat(True, n),
np.repeat(False, n),
np.repeat(True, n),
)
),
"nomeds": np.concatenate(
(
np.repeat(False, n),
np.repeat(False, n),
np.repeat(True, n),
np.repeat(True, n),
)
),
}
)
df.groupby(["nomeds", "alcohol"])["nsneeze"].agg(["mean", "var"])
###Output
_____no_output_____
###Markdown
As in the Poisson regression example, we see that drinking alcohol and/or not taking antihistamines increase the sneezing rate to varying degrees. Unlike in that example, for each combination of `alcohol` and `nomeds`, the variance of `nsneeze` is higher than the mean. This suggests that a Poisson distribution would be a poor fit for the data since the mean and variance of a Poisson distribution are equal. Visualize the Data
###Code
g = sns.catplot(
x="nsneeze", row="nomeds", col="alcohol", data=df, kind="count", aspect=1.5
)
# Make x-axis ticklabels less crowded
ax = g.axes[1, 0]
labels = range(len(ax.get_xticklabels(which="both")))
ax.set_xticks(labels[::5])
ax.set_xticklabels(labels[::5]);
###Output
/home/amit/miniconda3/envs/pymc3/lib/python3.8/site-packages/seaborn/axisgrid.py:382: UserWarning: This figure was using constrained_layout==True, but that is incompatible with subplots_adjust and or tight_layout: setting constrained_layout==False.
fig.tight_layout()
###Markdown
Negative Binomial Regression Create GLM Model
###Code
fml = "nsneeze ~ alcohol + nomeds + alcohol:nomeds"
with pm.Model() as model:
pm.glm.GLM.from_formula(
formula=fml, data=df, family=pm.glm.families.NegativeBinomial()
)
trace = pm.sample(1000, tune=1000, cores=2, return_inferencedata=True)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [alpha, mu, alcohol[T.True]:nomeds[T.True], nomeds[T.True], alcohol[T.True], Intercept]
###Markdown
View Results
###Code
az.plot_trace(trace)
# Transform coefficients to recover parameter values
np.exp(az.summary(trace)[["mean", "hdi_3%", "hdi_97%"]])
###Output
_____no_output_____
###Markdown
The mean values are close to the values we specified when generating the data:- The base rate is a constant 1.- Drinking alcohol triples the base rate.- Not taking antihistamines increases the base rate by 6 times.- Drinking alcohol and not taking antihistamines doubles the rate that would be expected if their rates were independent. If they were independent, then doing both would increase the base rate by 3\*6=18 times, but instead the base rate is increased by 3\*6\*2=36 times.Finally, even though the sample for `mu` is highly skewed, its median value is close to the sample mean, and the mean of `alpha` is also quite close to its actual value of 10.
###Code
np.percentile(trace.posterior["mu"], [25, 50, 75])
df.nsneeze.mean()
trace.posterior["alpha"].mean()
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
seaborn 0.10.1
numpy 1.18.5
pymc3 3.9.3
pandas 1.0.5
re 2.2.1
arviz 0.9.0
last updated: Mon Oct 05 2020
CPython 3.8.3
IPython 7.16.1
watermark 2.0.2
###Markdown
GLM: Negative Binomial Regression
###Code
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import re
import seaborn as sns
from scipy import stats
print('Running on PyMC3 v{}'.format(pm.__version__))
%config InlineBackend.figure_format = 'retina'
az.style.use('arviz-darkgrid')
###Output
_____no_output_____
###Markdown
This notebook demos negative binomial regression using the `glm` submodule. It closely follows the GLM Poisson regression example by [Jonathan Sedar](https://github.com/jonsedar) (which is in turn inspired by [a project by Ian Osvald](http://ianozsvald.com/2016/05/07/statistically-solving-sneezes-and-sniffles-a-work-in-progress-report-at-pydatalondon-2016/)) except the data here is negative binomially distributed instead of Poisson distributed.Negative binomial regression is used to model count data for which the variance is higher than the mean. The [negative binomial distribution](https://en.wikipedia.org/wiki/Negative_binomial_distribution) can be thought of as a Poisson distribution whose rate parameter is gamma distributed, so that rate parameter can be adjusted to account for the increased variance. Convenience FunctionsTaken from the Poisson regression example.
###Code
def plot_traces(trcs, varnames=None):
'''Plot traces with overlaid means and values'''
nrows = len(trcs.varnames)
if varnames is not None:
nrows = len(varnames)
ax = pm.traceplot(trcs, var_names=varnames, figsize=(12, nrows*1.4),
lines=tuple([(k, {}, v['mean'])
for k, v in pm.summary(trcs, varnames=varnames).iterrows()]))
for i, mn in enumerate(pm.summary(trcs, varnames=varnames)['mean']):
ax[i, 0].annotate('{:.2f}'.format(mn), xy=(mn, 0), xycoords='data',
xytext=(5, 10), textcoords='offset points', rotation=90,
va='bottom', fontsize='large', color='#AA0022')
def strip_derived_rvs(rvs):
'''Remove PyMC3-generated RVs from a list'''
ret_rvs = []
for rv in rvs:
if not (re.search('_log', rv.name) or re.search('_interval', rv.name)):
ret_rvs.append(rv)
return ret_rvs
###Output
_____no_output_____
###Markdown
Generate DataAs in the Poisson regression example, we assume that sneezing occurs at some baseline rate, and that consuming alcohol, not taking antihistamines, or doing both, increase its frequency. Poisson DataFirst, let's look at some Poisson distributed data from the Poisson regression example.
###Code
np.random.seed(123)
# Mean Poisson values
theta_noalcohol_meds = 1 # no alcohol, took an antihist
theta_alcohol_meds = 3 # alcohol, took an antihist
theta_noalcohol_nomeds = 6 # no alcohol, no antihist
theta_alcohol_nomeds = 36 # alcohol, no antihist
# Create samples
q = 1000
df_pois = pd.DataFrame({
'nsneeze': np.concatenate((np.random.poisson(theta_noalcohol_meds, q),
np.random.poisson(theta_alcohol_meds, q),
np.random.poisson(theta_noalcohol_nomeds, q),
np.random.poisson(theta_alcohol_nomeds, q))),
'alcohol': np.concatenate((np.repeat(False, q),
np.repeat(True, q),
np.repeat(False, q),
np.repeat(True, q))),
'nomeds': np.concatenate((np.repeat(False, q),
np.repeat(False, q),
np.repeat(True, q),
np.repeat(True, q)))})
df_pois.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
###Output
_____no_output_____
###Markdown
Since the mean and variance of a Poisson distributed random variable are equal, the sample means and variances are very close. Negative Binomial DataNow, suppose every subject in the dataset had the flu, increasing the variance of their sneezing (and causing an unfortunate few to sneeze over 70 times a day). If the mean number of sneezes stays the same but variance increases, the data might follow a negative binomial distribution.
###Code
# Gamma shape parameter
alpha = 10
def get_nb_vals(mu, alpha, size):
"""Generate negative binomially distributed samples by
drawing a sample from a gamma distribution with mean `mu` and
shape parameter `alpha', then drawing from a Poisson
distribution whose rate parameter is given by the sampled
gamma variable.
"""
g = stats.gamma.rvs(alpha, scale=mu / alpha, size=size)
return stats.poisson.rvs(g)
# Create samples
n = 1000
df = pd.DataFrame({
'nsneeze': np.concatenate((get_nb_vals(theta_noalcohol_meds, alpha, n),
get_nb_vals(theta_alcohol_meds, alpha, n),
get_nb_vals(theta_noalcohol_nomeds, alpha, n),
get_nb_vals(theta_alcohol_nomeds, alpha, n))),
'alcohol': np.concatenate((np.repeat(False, n),
np.repeat(True, n),
np.repeat(False, n),
np.repeat(True, n))),
'nomeds': np.concatenate((np.repeat(False, n),
np.repeat(False, n),
np.repeat(True, n),
np.repeat(True, n)))})
df.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
###Output
_____no_output_____
###Markdown
As in the Poisson regression example, we see that drinking alcohol and/or not taking antihistamines increase the sneezing rate to varying degrees. Unlike in that example, for each combination of `alcohol` and `nomeds`, the variance of `nsneeze` is higher than the mean. This suggests that a Poisson distribution would be a poor fit for the data since the mean and variance of a Poisson distribution are equal. Visualize the Data
###Code
g = sns.catplot(x='nsneeze', row='nomeds', col='alcohol', data=df, kind='count', aspect=1.5)
# Make x-axis ticklabels less crowded
ax = g.axes[1, 0]
labels = range(len(ax.get_xticklabels(which='both')))
ax.set_xticks(labels[::5])
ax.set_xticklabels(labels[::5]);
###Output
_____no_output_____
###Markdown
Negative Binomial Regression Create GLM Model
###Code
fml = 'nsneeze ~ alcohol + nomeds + alcohol:nomeds'
with pm.Model() as model:
pm.glm.GLM.from_formula(formula=fml, data=df, family=pm.glm.families.NegativeBinomial())
trace = pm.sample(1000, tune=1000, cores=2)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [alpha, mu, alcohol[T.True]:nomeds[T.True], nomeds[T.True], alcohol[T.True], Intercept]
Sampling 2 chains: 100%|██████████| 4000/4000 [01:08<00:00, 58.46draws/s]
The number of effective samples is smaller than 25% for some parameters.
###Markdown
View Results
###Code
varnames = [rv.name for rv in strip_derived_rvs(model.unobserved_RVs)]
plot_traces(trace, varnames=varnames);
# Transform coefficients to recover parameter values
np.exp(pm.summary(trace, varnames=varnames)[['mean','hpd_2.5','hpd_97.5']])
###Output
_____no_output_____
###Markdown
The mean values are close to the values we specified when generating the data:- The base rate is a constant 1.- Drinking alcohol triples the base rate.- Not taking antihistamines increases the base rate by 6 times.- Drinking alcohol and not taking antihistamines doubles the rate that would be expected if their rates were independent. If they were independent, then doing both would increase the base rate by 3\*6=18 times, but instead the base rate is increased by 3\*6\*2=36 times.Finally, even though the sample for `mu` is highly skewed, its median value is close to the sample mean, and the mean of `alpha` is also quite close to its actual value of 10.
###Code
np.percentile(trace['mu'], [25,50,75])
df.nsneeze.mean()
trace['alpha'].mean()
%load_ext watermark
%watermark -n -u -v -iv -w
###Output
pymc3 3.8
arviz 0.8.3
numpy 1.17.5
last updated: Thu Jun 11 2020
CPython 3.8.2
IPython 7.11.0
watermark 2.0.2
|
002_Python_Functions_Built_in/046_Python_object().ipynb | ###Markdown
All the IPython Notebooks in this lecture series by Dr. Milan Parmar are available @ **[GitHub](https://github.com/milaan9/04_Python_Functions/tree/main/002_Python_Functions_Built_in)** Python `object()`The **`object()`** function returns a featureless object which is a base for all classes.**Syntax**:```pythono = object()``` `object()` ParametersThe **`object()`** function doesn't accept any parameters. Return Value from `object()`The **`object()`** function returns a featureless object.
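Because `object` sits at the top of every class hierarchy, instances of any class pass an `isinstance` check against it. A minimal sketch (the `Dog` class is just a hypothetical example):

```python
class Dog:
    pass

print(issubclass(Dog, object))    # True: user-defined classes inherit from object
print(isinstance(Dog(), object))  # True: so their instances are objects too
print(isinstance(object(), Dog))  # False: a bare object is not a Dog
```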
###Code
# Example: How object() works?
test = object()
print(type(test))
print(dir(test))
###Output
<class 'object'>
['__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__']
|
pandas/selection.ipynb | ###Markdown
Selection
After we create a `DataFrame`, there are several ways to select data: **row-wise** or **column-wise**, through **indexing** or **slicing**, and by **label name** or by **position**.
The table below summarizes the selection methods.
| | **Single Column** | **Multiple Columns** | **Continuous Columns** | **All Columns** |
| ------------------- | ------------------------------------------------ | ---------------------------------------------------------- | ----------------------------------- | ---------------- |
| **Single Row** | `df.loc[row, column]` or `df.at[row, column]` | `df.loc[row, [column, column]]` | `df.loc[row, column:column]` | `df.loc[row]` |
| **Multiple Rows** | `df.loc[[row, row], column]` | `df.loc[[row, row], [column, column]]` | `df.loc[[row, row], column:column]` | `df[[row, row]]` |
| **Continuous Rows** | `df.loc[row:row, column]` | `df.loc[row:row, [column, column]]` | `df.loc[row:row, column:column]` | `df[row:row]` |
| **All Rows** | `df[column]` | `df[[column, column]]` or `df.loc[:, [column, column]]` | `df.loc[:, column:column]` | `df` |
- `df.iloc` is the same as `df.loc`, but it selects by position.
- `df.iat` is the same as `df.at`, but it selects by position.
###Code
import numpy as np
import pandas as pd
df = pd.DataFrame(
np.arange(30).reshape(6, 5),
index=list("abcdef"),
columns=["col1", "col2", "col3", "col4", "col5"]
)
df
###Output
_____no_output_____
###Markdown
We will use the `DataFrame` above to demonstrate the techniques of `selection`. ---
Getting Data Directly with `[]`
###Code
df["col1"] # same as `df.col1`
df[["col1", "col2"]]
df[0:3] # same as `df["a":"c"]`
###Output
_____no_output_____
###Markdown
---
Selection with the `loc` and `at` methods (select by label)
The first argument to `loc` selects the **row**(s), and the second selects the **column**(s).
There are three ways to specify each parameter (a short sketch follows the list):
- single element (e.g., `"a"`)
- list (e.g., `["a", "c", "e"]`)
- slicing (e.g., `"a":"e"`)
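A minimal sketch of the three forms applied to the row parameter, using the `df` defined above (note that label slices include both endpoints):

```python
df.loc["a", "col1"]              # single label
df.loc[["a", "c", "e"], "col1"]  # list of labels
df.loc["a":"e", "col1"]          # label slice, inclusive of "e"
```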
###Code
df.loc["a":"d", ["col1", "col2"]]
df.loc["a", "col5"]
df.at["a", "col5"]
###Output
_____no_output_____
###Markdown
---
Selection with the `iloc` and `iat` methods (select by position)
`iloc` is almost the same as the `loc` method, but it uses positions as the index.
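For example, both of the following select the same cell, the value `9` in row `"b"` and column `"col5"`:

```python
df.loc["b", "col5"]  # by label
df.iloc[1, 4]        # by position
```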
###Code
df.iloc[0:2, [0, 3, 4]]
df.iloc[1, 4]
df.iat[1, 4]
###Output
_____no_output_____
###Markdown
---
Boolean indexing
###Code
df[(df["col1"] > 18)] # Using a single column's values to select data.
df[(df > 6) & (df < 25)] # Selecting values from a DataFrame where a boolean condition is met.
df[df["col1"].isin([10, 15, 0])] # Using the isin() method for filtering.
###Output
_____no_output_____ |
examples/lidar.ipynb | ###Markdown
Visualize Lidar Scattered Point Elevation DataThis notebook uses datashader to visualize Lidar elevation data from [the Puget Sound Lidar consortium](http://pugetsoundlidar.ess.washington.edu/), a source of Lidar data for the Puget Sound region of Washington, U.S. SetupRun the `download_sample_data.py` script to download Lidar data from the S3 datashader examples bucket. The script downloads the data as a `.zip` and automatically unzips it to 25 three-column text files with the extension `.gnd`; the data come from the [Puget Sound LiDAR consortium](http://pugetsoundlidar.ess.washington.edu) and other example data sets. From your local clone of the `datashader` repository:```cd examplesconda env createsource activate ds python download_sample_data.py```Note on Windows, replace `source activate ds` with `activate ds`. Lidar Elevation DataExample X,Y,Z scattered point elevation data from the unpacked 7zip files (unpacked as .gnd files) ```! head data/q47122d2101.gnd``````X,Y,Z1291149.60,181033.64,467.951291113.29,181032.53,460.241291065.38,181035.74,451.411291113.16,181037.32,455.511291116.68,181037.42,456.201291162.42,181038.90,467.811291111.90,181038.15,454.891291066.62,181036.73,451.411291019.10,181035.20,451.64```The Seattle area example below loads 25 `.gnd` elevation files like the one above.
###Code
import os
from bokeh.models import WMTSTileSource
from dask.distributed import Client
from holoviews.operation.datashader import datashade
from pyproj import Proj, transform
import dask
import dask.dataframe as dd
import geoviews as gv
import glob
import holoviews as hv
import pandas as pd
client = Client()
if not os.path.exists('data'):
raise ValueError('Run python download_sample_data.py from the examples directory first')
LIDAR_XYZ_FILES = glob.glob(os.path.join('data', '*.gnd'))
if not LIDAR_XYZ_FILES:
raise ValueError('Run python download_sample_data.py from the examples directory first')
LIDAR_XYZ_FILES[:2]
###Output
_____no_output_____
###Markdown
Coordinate System Metadata (for this example)*Grid_Coordinate_System_Name*: State Plane Coordinate System*State_Plane_Coordinate_System*: SPCS_Zone_Identifier Washington North, FIPS 4601*Lambert_Conformal_Conic*: * Standard_Parallel: 47.500000 * Standard_Parallel: 48.733333 * Longitude_of_Central_Meridian: -120.833333 * Latitude_of_Projection_Origin: 47.000000 * False_Easting: 1640416.666667 * False_Northing: 0.000000 http://www.spatialreference.org/ref/esri/102348/Washington State Plane North - FIPS 4601
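The next cell performs this reprojection with the legacy `Proj(init=...)` / `transform` calls; on newer pyproj releases (2.1+) the same conversion is usually written with a `Transformer`. A minimal sketch, not part of the original notebook (the sample coordinates are taken from the `.gnd` excerpt above and are in feet):
```
from pyproj import Transformer
import numpy as np

FT_2_M = 0.3048
# EPSG:2855 (Washington North, metres) -> EPSG:3857 (Web Mercator); always_xy keeps (x, y) order
to_web_mercator = Transformer.from_crs("EPSG:2855", "EPSG:3857", always_xy=True)

xs_ft = np.array([1291149.60, 1291113.29])  # X values in feet
ys_ft = np.array([181033.64, 181032.53])    # Y values in feet
x_merc, y_merc = to_web_mercator.transform(xs_ft * FT_2_M, ys_ft * FT_2_M)
```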
###Code
washington_state_plane = Proj(init='epsg:2855') # Washington State Plane North (see metadata above)
web_mercator = Proj(init='epsg:3857') # Mercator projection EPSG code
FT_2_M = 0.3048
def convert_coords(ddf):
lon, lat = transform(washington_state_plane, web_mercator, ddf.X.values * FT_2_M, ddf.Y.values * FT_2_M)
ddf['meterswest'], ddf['metersnorth'] = lon, lat
ddf2 = ddf[['meterswest', 'metersnorth', 'Z']].copy()
del ddf
return ddf2
@dask.delayed
def read_gnd(fname):
return convert_coords(pd.read_csv(fname))
###Output
_____no_output_____
###Markdown
Use web_mercator (from above) to hard-code the bounding box
###Code
left, bottom = web_mercator(-122.32, 47.42)
right, top = web_mercator(-122.22, 47.52)
x_range, y_range = ((left, right), (bottom, top))
df = dd.from_delayed([read_gnd(f) for f in LIDAR_XYZ_FILES])
kdims=['meterswest', 'metersnorth',]
dataset = gv.Dataset(df, kdims=kdims, vdims=['Z'])
shade_defaults = dict(x_range=x_range, y_range=y_range, x_sampling=1, y_sampling=1, width=800, height=455)
tri = hv.Points(dataset, kdims=kdims, vdims=['Z'])
shaded = datashade(tri, **shade_defaults)
df.head()
###Output
_____no_output_____
###Markdown
Alternatively we could have done the following dask compute operations to get the bounds of the region:
```
minn, maxx = df.min().compute(), df.max().compute()
left, bottom = map(float, (minn.meterswest, minn.metersnorth))
right, top = map(float, (maxx.meterswest, maxx.metersnorth))
```
###Code
hv.notebook_extension('bokeh', width=95)
%opts Overlay [width=800 height=800 xaxis=None yaxis=None show_grid=False]
%opts Shape (fill_color=None line_width=1.5) [apply_ranges=False]
%opts Points [apply_ranges=False] WMTS (alpha=0.5) NdOverlay [tools=['tap']]
tiles = gv.WMTS(WMTSTileSource(url='https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{Z}/{Y}/{X}.jpg'))
tiles * shaded
###Output
_____no_output_____
###Markdown
A tutorial for the lidar Python packageThis notebook demonstrates the usage of the **lidar** Python package for terrain and hydrological analysis. It is useful for analyzing high-resolution topographic data, such as digital elevation models (DEMs) derived from Light Detection and Ranging (LiDAR) data.* GitHub repo: https://github.com/giswqs/lidar* Documentation: https://lidar.readthedocs.io.* PyPI: https://pypi.org/project/lidar/* Binder: https://gishub.org/lidar-cloud* Free software: [MIT license](https://opensource.org/licenses/MIT)This tutorial can be accessed in three ways:* HTML version: https://gishub.org/lidar-html* Viewable Notebook: https://gishub.org/lidar-notebook* Interactive Notebook: https://gishub.org/lidar-cloud**Launch this tutorial as an interactive Jupyter Notebook on the cloud - [MyBinder.org](https://gishub.org/lidar-cloud).**![lidar-gif](https://i.imgur.com/aIttPVG.gif) Table of Content* [Installation](Installation)* [Getting data](Getting-data)* [Using lidar](Using-lidar)* [Displaying results](Displaying-results)* [lidar GUI](lidar-GUI)* [Citing lidar](Citing-lidar)* [Credits](Credits)* [Contact](Contact) InstallationThe **lidar** Python package supports a variety of platforms, including Microsoft Windows, macOS, and Linux operating systems. Note that you will need to have **Python 3.x** installed. Python 2.x is not supported. The **lidar** Python package can be installed using the following command:`pip install lidar`If you have installed **lidar** Python package before and want to upgrade to the latest version, you can use the following command:`pip install lidar -U`If you encounter any installation issues, please check [Dependencies](https://github.com/giswqs/lidardependencies) on the **lidar** GitHub page and [Report Bugs](https://github.com/giswqs/lidar/issues). Getting data This section demonstrates two ways to get data into Binder so that you can test the **lidar** Python package on the cloud using your own data. * [Getting data from direct URLs](Getting-data-from-direct-URLs) * [Getting data from Google Drive](Getting-data-from-Google-Drive) Getting data from direct URLsIf you have data hosted on your own HTTP server or GitHub, you should be able to get direct URLs. With a direct URL, users can automatically download the data when the URL is clicked. For example http://wetlands.io/file/data/lidar-dem.zip Import the following Python libraries and start getting data from direct URLs.
###Code
import os
import pygis
###Output
_____no_output_____
###Markdown
Create a folder named *data* and set it as the working directory.
###Code
root_dir = os.getcwd()
work_dir = os.path.join(root_dir, 'data')
if not os.path.exists(work_dir):
os.mkdir(work_dir)
print("Working directory: {}".format(os.path.realpath(work_dir)))
###Output
Working directory: /home/jovyan/examples/data
###Markdown
Replace the following URL with your own direct URL hosting the data you would like to use.
###Code
url = "https://github.com/giswqs/lidar/raw/master/examples/lidar-dem.zip"
###Output
_____no_output_____
###Markdown
Download the data from the above URL and unzip the file if needed.
###Code
pygis.download_from_url(url, out_dir=work_dir)
###Output
Downloading lidar-dem.zip ...
Downloading done.
Unzipping lidar-dem.zip ...
Unzipping done.
Data directory: /home/jovyan/examples/data/lidar-dem
###Markdown
You have successfully downloaded data to Binder. Therefore, you can skip to [Using lidar](Using-lidar) and start testing **lidar** with your own data. Getting data from Google DriveAlternatively, you can upload data to [Google Drive](https://www.google.com/drive/) and then [share files publicly from Google Drive](https://support.google.com/drive/answer/2494822?co=GENIE.Platform%3DDesktop&hl=en). Once the file is shared publicly, you should be able to get a shareable URL. For example, https://drive.google.com/file/d/1c6v-ep5-klb2J32Nuu1rSyqAc8kEtmdh. **Replace the following URL with your own shareable URL from Google Drive.**
###Code
gfile_url = 'https://drive.google.com/file/d/1c6v-ep5-klb2J32Nuu1rSyqAc8kEtmdh'
###Output
_____no_output_____
###Markdown
**Download the shared file from Google Drive.**
###Code
pygis.download_from_gdrive(gfile_url, file_name='lidar-dem.zip', out_dir=work_dir)
###Output
Google Drive file id: 1c6v-ep5-klb2J32Nuu1rSyqAc8kEtmdh
Downloading 1c6v-ep5-klb2J32Nuu1rSyqAc8kEtmdh into /home/jovyan/examples/data/lidar-dem.zip... Done.
Unzipping...Done.
###Markdown
You have successfully downloaded data from Google Drive to Binder. You can now continue to [Using lidar](Using-lidar) and start testing **lidar** with your own data. Using lidar Here you can specify where your data are located. In this example, we will use [dem.tif](https://github.com/giswqs/lidar/blob/master/examples/lidar-dem/dem.tif), which has been downloaded to the *lidar-dem* folder. **Import the lidar package.**
###Code
import lidar
###Output
_____no_output_____
###Markdown
**List data under the data folder.**
###Code
data_dir = './data/lidar-dem/'
print(os.listdir(data_dir))
###Output
['dsm.tif', 'sink.tif', 'dem.tif']
###Markdown
**Create a temporary folder to save results.**
###Code
out_dir = os.path.join(os.getcwd(), "temp")
if not os.path.exists(out_dir):
os.mkdir(out_dir)
###Output
_____no_output_____
###Markdown
In this simple example, we smooth [dem.tif](https://github.com/giswqs/lidar/blob/master/examples/lidar-dem/dem.tif) using a median filter. Then we extract sinks (i.e., depressions) from the DEM. Finally, we delineate nested depression hierarchy using the [level-set algorithm](https://doi.org/10.1111/1752-1688.12689). **Set parameters for the level-set algorithm.**
###Code
min_size = 1000 # minimum number of pixels as a depression
min_depth = 0.3 # minimum depth as a depression
interval = 0.3 # slicing interval for the level-set method
bool_shp = False # output shapefiles for each individual level
###Output
_____no_output_____
###Markdown
**Smooth the original DEM using a median filter.**
###Code
# extracting sinks based on user-defined minimum depression size
in_dem = os.path.join(data_dir, 'dem.tif')
out_dem = os.path.join(out_dir, "median.tif")
in_dem = lidar.MedianFilter(in_dem, kernel_size=3, out_file=out_dem)
###Output
Median filtering ...
Run time: 0.0258 seconds
Saving dem ...
###Markdown
**Extract DEM sinks using a depression-filling algorithm.**
###Code
sink = lidar.ExtractSinks(in_dem, min_size, out_dir)
###Output
Loading data ...
min = 379.70, max = 410.72, no_data = -3.402823e+38, cell_size = 1.0
Depression filling ...
Saving filled dem ...
Region grouping ...
Computing properties ...
Saving sink dem ...
Saving refined dem ...
Converting raster to vector ...
Total run time: 0.0723 s
###Markdown
**Identify depression nested hierarchy using the level-set algorithm.**
###Code
dep_id, dep_level = lidar.DelineateDepressions(sink, min_size, min_depth, interval, out_dir, bool_shp)
###Output
Reading data ...
rows, cols: (400, 400)
Pixel resolution: 1.0
Read data time: 0.0036 seconds
Data preparation time: 0.0737 seconds
Total number of regions: 1
Processing Region # 1 ...
=========== Run time statistics ===========
(rows, cols): (400, 400)
Pixel resolution: 1.0 m
Number of regions: 1
Data preparation time: 0.0737 s
Identify level time: 0.5304 s
Write image time: 0.0057 s
Polygonize time: 0.0130 s
Total run time: 0.6238 s
###Markdown
**Print the list of output files.**
###Code
print('Results are saved in: {}'.format(out_dir))
print(os.listdir(out_dir))
###Output
Results are saved in: /home/jovyan/examples/temp
['depression_id.tif', 'depressions.shp', 'regions.shx', 'region.tif', 'sink.tif', 'depressions.dbf', 'depression_level.tif', 'regions.shp', 'depressions_info.csv', 'regions.prj', 'depressions.prj', 'dem.tif', 'depth.tif', 'depressions.shx', 'regions.dbf', 'regions_info.csv', 'dem_filled.tif', 'median.tif', 'dem_diff.tif']
###Markdown
Displaying resultsThis section demonstrates how to display images on Jupyter Notebook. Three Python packages are used here, including [matplotlib](https://matplotlib.org/), [imageio](https://imageio.readthedocs.io/en/stable/installation.html), and [tifffile](https://pypi.org/project/tifffile/). These three packages can be installed using the following command:`pip install matplotlib imageio tifffile` **Import the libraries.**
###Code
# comment out the third line (%matplotlib inline) if you run the tutorial in other IDEs other than Jupyter Notebook
import matplotlib.pyplot as plt
import imageio
%matplotlib inline
###Output
_____no_output_____
###Markdown
**Display one single image.**
###Code
raster = imageio.imread(os.path.join(data_dir, 'dem.tif'))
plt.imshow(raster)
plt.show()
###Output
_____no_output_____
###Markdown
**Read images as numpy arrays.**
###Code
smoothed = imageio.imread(os.path.join(out_dir, 'median.tif'))
sink = imageio.imread(os.path.join(out_dir, 'sink.tif'))
dep_id = imageio.imread(os.path.join(out_dir, 'depression_id.tif'))
dep_level = imageio.imread(os.path.join(out_dir, 'depression_level.tif'))
###Output
_____no_output_____
###Markdown
**Display multiple images in one plot.**
###Code
fig=plt.figure(figsize=(16,16))
ax1 = fig.add_subplot(2, 2, 1)
ax1.set_title('Smoothed DEM')
plt.imshow(smoothed)
ax2 = fig.add_subplot(2, 2, 2)
ax2.set_title('DEM Sinks')
plt.imshow(sink)
ax3 = fig.add_subplot(2, 2, 3)
ax3.set_title('Depression Unique ID')
plt.imshow(dep_id)
ax4 = fig.add_subplot(2, 2, 4)
ax4.set_title('Depression Level')
plt.imshow(dep_level)
plt.show()
###Output
_____no_output_____
###Markdown
A tutorial for the lidar Python packageThis notebook demonstrates the usage of the **lidar** Python package for terrain and hydrological analysis. It is useful for analyzing high-resolution topographic data, such as digital elevation models (DEMs) derived from Light Detection and Ranging (LiDAR) data.* GitHub repo: https://github.com/giswqs/lidar* Documentation: https://lidar.readthedocs.io.* PyPI: https://pypi.org/project/lidar/* Binder: https://gishub.org/lidar-cloud* Free software: [MIT license](https://opensource.org/licenses/MIT)This tutorial can be accessed in three ways:* HTML version: https://gishub.org/lidar-html* Viewable Notebook: https://gishub.org/lidar-notebook* Interactive Notebook: https://gishub.org/lidar-cloud**Launch this tutorial as an interactive Jupyter Notebook on the cloud - [MyBinder.org](https://gishub.org/lidar-cloud).**![lidar-gif](https://i.imgur.com/aIttPVG.gif) Table of Content* [Installation](Installation)* [Getting data](Getting-data)* [Using lidar](Using-lidar)* [Displaying results](Displaying-results)* [lidar GUI](lidar-GUI)* [Citing lidar](Citing-lidar)* [Credits](Credits)* [Contact](Contact) InstallationThe **lidar** Python package supports a variety of platforms, including Microsoft Windows, macOS, and Linux operating systems. Note that you will need to have **Python 3.x** installed. Python 2.x is not supported. The **lidar** Python package can be installed using the following command:`pip install lidar`If you have installed **lidar** Python package before and want to upgrade to the latest version, you can use the following command:`pip install lidar -U`If you encounter any installation issues, please check [Dependencies](https://github.com/giswqs/lidardependencies) on the **lidar** GitHub page and [Report Bugs](https://github.com/giswqs/lidar/issues). Getting data This section demonstrates two ways to get data into Binder so that you can test the **lidar** Python package on the cloud using your own data. * [Getting data from direct URLs](Getting-data-from-direct-URLs) * [Getting data from Google Drive](Getting-data-from-Google-Drive) Getting data from direct URLsIf you have data hosted on your own HTTP server or GitHub, you should be able to get direct URLs. With a direct URL, users can automatically download the data when the URL is clicked. For example http://wetlands.io/file/data/lidar-dem.zip Import the following Python libraries and start getting data from direct URLs.
###Code
import os
import zipfile
import tarfile
import shutil
import urllib.request
###Output
_____no_output_____
###Markdown
Create a folder named *lidar* under the user home folder and set it as the working directory.
###Code
work_dir = os.path.join(os.path.expanduser("~"), 'lidar')
if not os.path.exists(work_dir):
os.mkdir(work_dir)
# os.chdir(work_dir)
print("Working directory: {}".format(work_dir))
###Output
Working directory: /home/qiusheng/lidar
###Markdown
Replace the following URL with your own direct URL hosting the data you would like to use.
###Code
url = "https://github.com/giswqs/lidar/raw/master/examples/lidar-dem.zip"
###Output
_____no_output_____
###Markdown
Download the data from the above URL and unzip the file if needed.
###Code
# download the file
zip_name = os.path.basename(url)
zip_path = os.path.join(work_dir, zip_name)
print('Downloading {} ...'.format(zip_name))
urllib.request.urlretrieve(url, zip_path)
print('Downloading done.')
# if it is a zip file
if '.zip' in zip_name:
print("Unzipping {} ...".format(zip_name))
with zipfile.ZipFile(zip_path, "r") as zip_ref:  # open the archive by its full download path
zip_ref.extractall(work_dir)
print('Unzipping done.')
# if it is a tar file
if '.tar' in zip_name:
print("Unzipping {} ...".format(zip_name))
with tarfile.open(zip_path, "r") as tar_ref:  # open the archive by its full download path
tar_ref.extractall(work_dir)
print('Unzipping done.')
print('Data directory: {}'.format(os.path.splitext(zip_path)[0]))
###Output
Downloading lidar-dem.zip ...
Downloading done.
Unzipping lidar-dem.zip ...
Unzipping done.
Data directory: /home/qiusheng/lidar/lidar-dem
###Markdown
You have successfully downloaded data to Binder. Therefore, you can skip to [Using lidar](Using-lidar) and start testing **lidar** with your own data. Getting data from Google DriveAlternatively, you can upload data to [Google Drive](https://www.google.com/drive/) and then [share files publicly from Google Drive](https://support.google.com/drive/answer/2494822?co=GENIE.Platform%3DDesktop&hl=en). Once the file is shared publicly, you should be able to get a shareable URL. For example, https://drive.google.com/file/d/1c6v-ep5-klb2J32Nuu1rSyqAc8kEtmdh. To download files from Google Drive to Binder, you can use the Python package called [google-drive-downloader](https://github.com/ndrplz/google-drive-downloader), which can be installed using the following command:`pip install googledrivedownloader requests` **Replace the following URL with your own shareable URL from Google Drive.**
###Code
gfile_url = 'https://drive.google.com/file/d/1c6v-ep5-klb2J32Nuu1rSyqAc8kEtmdh'
###Output
_____no_output_____
###Markdown
**Extract the file id from the above URL.**
###Code
file_id = gfile_url.split('/')[5] #'1c6v-ep5-klb2J32Nuu1rSyqAc8kEtmdh'
print('Google Drive file id: {}'.format(file_id))
###Output
Google Drive file id: 1c6v-ep5-klb2J32Nuu1rSyqAc8kEtmdh
###Markdown
**Download the shared file from Google Drive.**
###Code
from google_drive_downloader import GoogleDriveDownloader as gdd
dest_path = './lidar-dem.zip' # choose a name for the downloaded file
gdd.download_file_from_google_drive(file_id, dest_path, unzip=True)
###Output
_____no_output_____
###Markdown
You have successfully downloaded data from Google Drive to Binder. You can now continue to [Using lidar](Using-lidar) and start testing **lidar** with your own data. Using lidar Here you can specify where your data are located. In this example, we will use [dem.tif](https://github.com/giswqs/lidar/blob/master/examples/lidar-dem/dem.tif), which has been downloaded to the *lidar-dem* folder. **Import the lidar package.**
###Code
import lidar
###Output
_____no_output_____
###Markdown
**List data under the data folder.**
###Code
data_dir = './lidar-dem/'
print(os.listdir(data_dir))
###Output
['sink.tif', 'dem.tif', 'dsm.tif']
###Markdown
**Create a temporary folder to save results.**
###Code
out_dir = os.path.join(os.getcwd(), "temp")
if not os.path.exists(out_dir):
os.mkdir(out_dir)
###Output
_____no_output_____
###Markdown
In this simple example, we smooth [dem.tif](https://github.com/giswqs/lidar/blob/master/examples/lidar-dem/dem.tif) using a median filter. Then we extract sinks (i.e., depressions) from the DEM. Finally, we delineate nested depression hierarchy using the [level-set algorithm](https://doi.org/10.1111/1752-1688.12689). **Set parameters for the level-set algorithm.**
###Code
min_size = 1000 # minimum number of pixels as a depression
min_depth = 0.3 # minimum depth as a depression
interval = 0.3 # slicing interval for the level-set method
bool_shp = False # output shapefiles for each individual level
###Output
_____no_output_____
###Markdown
**Smooth the original DEM using a median filter.**
###Code
# extracting sinks based on user-defined minimum depression size
in_dem = os.path.join(data_dir, 'dem.tif')
out_dem = os.path.join(out_dir, "median.tif")
in_dem = lidar.MedianFilter(in_dem, kernel_size=3, out_file=out_dem)
###Output
Median filtering ...
Run time: 0.0190 seconds
Saving dem ...
###Markdown
**Extract DEM sinks using a depression-filling algorithm.**
###Code
sink = lidar.ExtractSinks(in_dem, min_size, out_dir)
###Output
Loading data ...
min = 379.70, max = 410.72, no_data = -3.402823e+38, cell_size = 1.0
Depression filling ...
Saving filled dem ...
Region grouping ...
Computing properties ...
Saving sink dem ...
Saving refined dem ...
Converting raster to vector ...
Total run time: 0.1093 s
###Markdown
**Identify depression nested hierarchy using the level-set algorithm.**
###Code
dep_id, dep_level = lidar.DelineateDepressions(sink, min_size, min_depth, interval, out_dir, bool_shp)
###Output
Reading data ...
rows, cols: (400, 400)
Pixel resolution: 1.0
Read data time: 0.0024 seconds
Data preparation time: 0.0100 seconds
Total number of regions: 1
Processing Region # 1 ...
=========== Run time statistics ===========
(rows, cols): (400, 400)
Pixel resolution: 1.0 m
Number of regions: 1
Data preparation time: 0.0100 s
Identify level time: 0.3347 s
Write image time: 0.0164 s
Polygonize time: 0.0098 s
Total run time: 0.3719 s
###Markdown
**Print the list of output files.**
###Code
print('Results are saved in: {}'.format(out_dir))
print(os.listdir(out_dir))
###Output
Results are saved in: /media/hdd/Dropbox/git/lidar/examples/temp
['depressions.dbf', 'depressions.prj', 'regions_info.csv', 'regions.shp', 'region.tif', 'depression_level.tif', 'depressions.shx', 'depression_id.tif', 'depressions_info.csv', 'depth.tif', 'depressions.shp', 'median.tif', 'dem_diff.tif', 'regions.shx', 'sink.tif', 'dem_filled.tif', 'dem.tif', 'regions.dbf', 'regions.prj']
###Markdown
Displaying resultsThis section demonstrates how to display images on Jupyter Notebook. Three Python packages are used here, including [matplotlib](https://matplotlib.org/), [imageio](https://imageio.readthedocs.io/en/stable/installation.html), and [tifffile](https://pypi.org/project/tifffile/). These three packages can be installed using the following command:`pip install matplotlib imageio tifffile` **Import the libraries.**
###Code
# comment out the third line (%matplotlib inline) if you run the tutorial in other IDEs other than Jupyter Notebook
import matplotlib.pyplot as plt
import imageio
%matplotlib inline
###Output
_____no_output_____
###Markdown
**Display one single image.**
###Code
raster = imageio.imread(os.path.join(data_dir, 'dem.tif'))
plt.imshow(raster)
plt.show()
###Output
_____no_output_____
###Markdown
**Read images as numpy arrays.**
###Code
smoothed = imageio.imread(os.path.join(out_dir, 'median.tif'))
sink = imageio.imread(os.path.join(out_dir, 'sink.tif'))
dep_id = imageio.imread(os.path.join(out_dir, 'depression_id.tif'))
dep_level = imageio.imread(os.path.join(out_dir, 'depression_level.tif'))
###Output
_____no_output_____
###Markdown
**Display multiple images in one plot.**
###Code
fig=plt.figure(figsize=(16,16))
ax1 = fig.add_subplot(2, 2, 1)
ax1.set_title('Smoothed DEM')
plt.imshow(smoothed)
ax2 = fig.add_subplot(2, 2, 2)
ax2.set_title('DEM Sinks')
plt.imshow(sink)
ax3 = fig.add_subplot(2, 2, 3)
ax3.set_title('Depression Unique ID')
plt.imshow(dep_id)
ax4 = fig.add_subplot(2, 2, 4)
ax4.set_title('Depression Level')
plt.imshow(dep_level)
plt.show()
###Output
_____no_output_____
###Markdown
A tutorial for the lidar Python packageThis notebook demonstrates the usage of the **lidar** Python package for terrain and hydrological analysis. It is useful for analyzing high-resolution topographic data, such as digital elevation models (DEMs) derived from Light Detection and Ranging (LiDAR) data.* GitHub repo: https://github.com/giswqs/lidar* Documentation: https://lidar.gishub.org* PyPI: https://pypi.org/project/lidar* Binder: https://gishub.org/lidar-cloud* Free software: [MIT license](https://opensource.org/licenses/MIT)This tutorial can be accessed in three ways:* HTML version: https://gishub.org/lidar-html* Viewable Notebook: https://gishub.org/lidar-notebook* Interactive Notebook: https://gishub.org/lidar-cloud**Launch this tutorial as an interactive Jupyter Notebook on the cloud - [MyBinder.org](https://gishub.org/lidar-cloud).**![lidar-gif](https://i.imgur.com/aIttPVG.gif) Table of Content* [Installation](Installation)* [Getting data](Getting-data)* [Using lidar](Using-lidar)* [Displaying results](Displaying-results)* [lidar GUI](lidar-GUI)* [Citing lidar](Citing-lidar)* [Credits](Credits)* [Contact](Contact) InstallationThe **lidar** Python package supports a variety of platforms, including Microsoft Windows, macOS, and Linux operating systems. Note that you will need to have **Python 3.x** installed. Python 2.x is not supported. The **lidar** Python package can be installed using the following command:`pip install lidar`If you have installed **lidar** Python package before and want to upgrade to the latest version, you can use the following command:`pip install lidar -U`If you encounter any installation issues, please check [Dependencies](https://github.com/giswqs/lidardependencies) on the **lidar** GitHub page and [Report Bugs](https://github.com/giswqs/lidar/issues). Getting data This section demonstrates two ways to get data into Binder so that you can test the **lidar** Python package on the cloud using your own data. * [Getting data from direct URLs](Getting-data-from-direct-URLs) * [Getting data from Google Drive](Getting-data-from-Google-Drive) Getting data from direct URLsIf you have data hosted on your own HTTP server or GitHub, you should be able to get direct URLs. With a direct URL, users can automatically download the data when the URL is clicked. For example http://wetlands.io/file/data/lidar-dem.zip Import the following Python libraries and start getting data from direct URLs.
###Code
import os
import zipfile
import tarfile
import shutil
import urllib.request
###Output
_____no_output_____
###Markdown
Create a folder named *lidar* under the user home folder and set it as the working directory.
###Code
work_dir = os.path.join(os.path.expanduser("~"), 'lidar')
if not os.path.exists(work_dir):
os.mkdir(work_dir)
# os.chdir(work_dir)
print("Working directory: {}".format(work_dir))
###Output
Working directory: /home/qiusheng/lidar
###Markdown
Replace the following URL with your own direct URL hosting the data you would like to use.
###Code
url = "https://github.com/giswqs/lidar/raw/master/examples/lidar-dem.zip"
###Output
_____no_output_____
###Markdown
Download the data from the above URL and unzip the file if needed.
###Code
# download the file
zip_name = os.path.basename(url)
zip_path = os.path.join(work_dir, zip_name)
print('Downloading {} ...'.format(zip_name))
urllib.request.urlretrieve(url, zip_path)
print('Downloading done.')
# if it is a zip file
if '.zip' in zip_name:
print("Unzipping {} ...".format(zip_name))
with zipfile.ZipFile(zip_path, "r") as zip_ref:  # open the archive by its full download path
zip_ref.extractall(work_dir)
print('Unzipping done.')
# if it is a tar file
if '.tar' in zip_name:
print("Unzipping {} ...".format(zip_name))
with tarfile.open(zip_path, "r") as tar_ref:  # open the archive by its full download path
tar_ref.extractall(work_dir)
print('Unzipping done.')
print('Data directory: {}'.format(os.path.splitext(zip_path)[0]))
###Output
Downloading lidar-dem.zip ...
Downloading done.
Unzipping lidar-dem.zip ...
Unzipping done.
Data directory: /home/qiusheng/lidar/lidar-dem
###Markdown
You have successfully downloaded data to Binder. Therefore, you can skip to [Using lidar](Using-lidar) and start testing **lidar** with your own data. Getting data from Google DriveAlternatively, you can upload data to [Google Drive](https://www.google.com/drive/) and then [share files publicly from Google Drive](https://support.google.com/drive/answer/2494822?co=GENIE.Platform%3DDesktop&hl=en). Once the file is shared publicly, you should be able to get a shareable URL. For example, https://drive.google.com/file/d/1c6v-ep5-klb2J32Nuu1rSyqAc8kEtmdh. To download files from Google Drive to Binder, you can use the Python package called [google-drive-downloader](https://github.com/ndrplz/google-drive-downloader), which can be installed using the following command:`pip install googledrivedownloader requests` **Replace the following URL with your own shareable URL from Google Drive.**
###Code
gfile_url = 'https://drive.google.com/file/d/1c6v-ep5-klb2J32Nuu1rSyqAc8kEtmdh'
###Output
_____no_output_____
###Markdown
**Extract the file id from the above URL.**
###Code
file_id = gfile_url.split('/')[5] #'1c6v-ep5-klb2J32Nuu1rSyqAc8kEtmdh'
print('Google Drive file id: {}'.format(file_id))
###Output
Google Drive file id: 1c6v-ep5-klb2J32Nuu1rSyqAc8kEtmdh
###Markdown
**Download the shared file from Google Drive.**
###Code
from google_drive_downloader import GoogleDriveDownloader as gdd
dest_path = './lidar-dem.zip' # choose a name for the downloaded file
gdd.download_file_from_google_drive(file_id, dest_path, unzip=True)
###Output
_____no_output_____
###Markdown
You have successfully downloaded data from Google Drive to Binder. You can now continue to [Using lidar](Using-lidar) and start testing **lidar** with your own data. Using lidar Here you can specify where your data are located. In this example, we will use [dem.tif](https://github.com/giswqs/lidar/blob/master/examples/lidar-dem/dem.tif), which has been downloaded to the *lidar-dem* folder. **Import the lidar package.**
###Code
import lidar
###Output
_____no_output_____
###Markdown
**List data under the data folder.**
###Code
data_dir = './lidar-dem/'
print(os.listdir(data_dir))
###Output
['sink.tif', 'dem.tif', 'dsm.tif']
###Markdown
**Create a temporary folder to save results.**
###Code
out_dir = os.path.join(os.getcwd(), "temp")
if not os.path.exists(out_dir):
os.mkdir(out_dir)
###Output
_____no_output_____
###Markdown
In this simple example, we smooth [dem.tif](https://github.com/giswqs/lidar/blob/master/examples/lidar-dem/dem.tif) using a median filter. Then we extract sinks (i.e., depressions) from the DEM. Finally, we delineate nested depression hierarchy using the [level-set algorithm](https://doi.org/10.1111/1752-1688.12689). **Set parameters for the level-set algorithm.**
###Code
min_size = 1000 # minimum number of pixels as a depression
min_depth = 0.3 # minimum depth as a depression
interval = 0.3 # slicing interval for the level-set method
bool_shp = False # output shapefiles for each individual level
###Output
_____no_output_____
###Markdown
**Smooth the original DEM using a median filter.**
###Code
# extracting sinks based on user-defined minimum depression size
in_dem = os.path.join(data_dir, 'dem.tif')
out_dem = os.path.join(out_dir, "median.tif")
in_dem = lidar.MedianFilter(in_dem, kernel_size=3, out_file=out_dem)
###Output
Median filtering ...
Run time: 0.0190 seconds
Saving dem ...
###Markdown
**Extract DEM sinks using a depression-filling algorithm.**
###Code
sink = lidar.ExtractSinks(in_dem, min_size, out_dir)
###Output
Loading data ...
min = 379.70, max = 410.72, no_data = -3.402823e+38, cell_size = 1.0
Depression filling ...
Saving filled dem ...
Region grouping ...
Computing properties ...
Saving sink dem ...
Saving refined dem ...
Converting raster to vector ...
Total run time: 0.1093 s
###Markdown
**Identify depression nested hierarchy using the level-set algorithm.**
###Code
dep_id, dep_level = lidar.DelineateDepressions(sink, min_size, min_depth, interval, out_dir, bool_shp)
###Output
Reading data ...
rows, cols: (400, 400)
Pixel resolution: 1.0
Read data time: 0.0024 seconds
Data preparation time: 0.0100 seconds
Total number of regions: 1
Processing Region # 1 ...
=========== Run time statistics ===========
(rows, cols): (400, 400)
Pixel resolution: 1.0 m
Number of regions: 1
Data preparation time: 0.0100 s
Identify level time: 0.3347 s
Write image time: 0.0164 s
Polygonize time: 0.0098 s
Total run time: 0.3719 s
###Markdown
**Print the list of output files.**
###Code
print('Results are saved in: {}'.format(out_dir))
print(os.listdir(out_dir))
###Output
Results are saved in: /media/hdd/Dropbox/git/lidar/examples/temp
['depressions.dbf', 'depressions.prj', 'regions_info.csv', 'regions.shp', 'region.tif', 'depression_level.tif', 'depressions.shx', 'depression_id.tif', 'depressions_info.csv', 'depth.tif', 'depressions.shp', 'median.tif', 'dem_diff.tif', 'regions.shx', 'sink.tif', 'dem_filled.tif', 'dem.tif', 'regions.dbf', 'regions.prj']
###Markdown
Displaying resultsThis section demonstrates how to display images on Jupyter Notebook. Three Python packages are used here, including [matplotlib](https://matplotlib.org/), [imageio](https://imageio.readthedocs.io/en/stable/installation.html), and [tifffile](https://pypi.org/project/tifffile/). These three packages can be installed using the following command:`pip install matplotlib imageio tifffile` **Import the libraries.**
###Code
# comment out the third line (%matplotlib inline) if you run the tutorial in other IDEs other than Jupyter Notebook
import matplotlib.pyplot as plt
import imageio
%matplotlib inline
###Output
_____no_output_____
###Markdown
**Display one single image.**
###Code
raster = imageio.imread(os.path.join(data_dir, 'dem.tif'))
plt.imshow(raster)
plt.show()
###Output
_____no_output_____
###Markdown
**Read images as numpy arrays.**
###Code
smoothed = imageio.imread(os.path.join(out_dir, 'median.tif'))
sink = imageio.imread(os.path.join(out_dir, 'sink.tif'))
dep_id = imageio.imread(os.path.join(out_dir, 'depression_id.tif'))
dep_level = imageio.imread(os.path.join(out_dir, 'depression_level.tif'))
###Output
_____no_output_____
###Markdown
**Display multiple images in one plot.**
###Code
fig=plt.figure(figsize=(16,16))
ax1 = fig.add_subplot(2, 2, 1)
ax1.set_title('Smoothed DEM')
plt.imshow(smoothed)
ax2 = fig.add_subplot(2, 2, 2)
ax2.set_title('DEM Sinks')
plt.imshow(sink)
ax3 = fig.add_subplot(2, 2, 3)
ax3.set_title('Depression Unique ID')
plt.imshow(dep_id)
ax4 = fig.add_subplot(2, 2, 4)
ax4.set_title('Depression Level')
plt.imshow(dep_level)
plt.show()
###Output
_____no_output_____ |
Capstone Part 2c - Classical ML Models (Mean MFCCs without Offset).ipynb | ###Markdown
Capstone Part 2c - Classical ML Models (Mean MFCCs without Offset)___ Setup
###Code
# Basic packages
import numpy as np
import pandas as pd
# For splitting the data into training and test sets
from sklearn.model_selection import train_test_split
# For scaling the data as necessary
from sklearn.preprocessing import StandardScaler
# For doing principal component analysis as necessary
from sklearn.decomposition import PCA
# For visualizations
import matplotlib.pyplot as plt
%matplotlib inline
# For building a variety of models
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
# For hyperparameter optimization
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
# For caching pipeline and grid search results
from tempfile import mkdtemp
# For model evaluation
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
# For getting rid of warning messages
import warnings
warnings.filterwarnings('ignore')
# Loading in the finished dataframe from part 1
df = pd.read_csv('C:/Users/Patrick/Documents/Capstone Data/ravdess_mfcc_mean.csv')
df.head()
###Output
_____no_output_____
###Markdown
___ Building Models for Classifying Gender (Regardless of Emotion)
###Code
# Splitting the dataframe into features and target
X = df.iloc[:, :-2]
g = df['Gender']
###Output
_____no_output_____
###Markdown
The convention is to name the target variable 'y', but I will be declaring many different target variables throughout the notebook, so I opted for 'g' for simplicity instead of 'y_g' or 'y_gen', for example.
###Code
# Splitting the data into training and test sets
X_train, X_test, g_train, g_test = train_test_split(X, g, test_size=0.3, stratify=g, random_state=1)
# Checking the shapes
print(X_train.shape)
print(X_test.shape)
print(g_train.shape)
print(g_test.shape)
###Output
(1006, 130)
(432, 130)
(1006,)
(432,)
###Markdown
I want to build a simple, initial classifier to get a sense of the performances I might get in more optimized models. To this end, I will build a logistic regression model without doing any cross-validation or hyperparameter optimization.
###Code
# Instantiate the model
initial_logreg = LogisticRegression()
# Fit to training set
initial_logreg.fit(X_train, g_train)
# Score on training set
print(f'Model accuracy on training set: {initial_logreg.score(X_train, g_train)*100}%')
# Score on test set
print(f'Model accuracy on test set: {initial_logreg.score(X_test, g_test)*100}%')
###Output
Model accuracy on training set: 88.36978131212724%
Model accuracy on test set: 80.0925925925926%
###Markdown
___ Building Models for Classifying Emotion for Males
###Code
# Making a new dataframe that contains only male recordings
male_df = df[df['Gender'] == 'male'].reset_index().drop('index', axis=1)
# Splitting the dataframe into features and target
Xm = male_df.iloc[:, :-2]
em = male_df['Emotion']
# Splitting the data into training and test sets
Xm_train, Xm_test, em_train, em_test = train_test_split(Xm, em, test_size=0.3, stratify=em, random_state=1)
# Checking the shapes
print(Xm_train.shape)
print(Xm_test.shape)
print(em_train.shape)
print(em_test.shape)
###Output
(502, 130)
(216, 130)
(502,)
(216,)
###Markdown
As before, I will try building an initial model.
###Code
# Instantiate the model
initial_logreg_em = LogisticRegression()
# Fit to training set
initial_logreg_em.fit(Xm_train, em_train)
# Score on training set
print(f'Model accuracy on training set: {initial_logreg_em.score(Xm_train, em_train)*100}%')
# Score on test set
print(f'Model accuracy on test set: {initial_logreg_em.score(Xm_test, em_test)*100}%')
# Having initial_logreg_em make predictions based on the test set features
em_pred = initial_logreg_em.predict(Xm_test)
# Building the confusion matrix as a dataframe
emotions = ['angry', 'calm', 'disgusted', 'fearful', 'happy', 'neutral', 'sad', 'surprised']
em_confusion_df = pd.DataFrame(confusion_matrix(em_test, em_pred))
em_confusion_df.columns = [f'Predicted {emotion}' for emotion in emotions]
em_confusion_df.index = [f'Actual {emotion}' for emotion in emotions]
em_confusion_df
# Classification report
print(classification_report(em_test, em_pred))
# PCA on unscaled features
# Instantiate PCA and fit to Xm_train
pca = PCA().fit(Xm_train)
# Transform Xm_train
Xm_train_pca = pca.transform(Xm_train)
# Transform Xm_test
Xm_test_pca = pca.transform(Xm_test)
# Standard scaling
# Instantiate the scaler and fit to Xm_train
scaler = StandardScaler().fit(Xm_train)
# Transform Xm_train
Xm_train_scaled = scaler.transform(Xm_train)
# Transform Xm_test
Xm_test_scaled = scaler.transform(Xm_test)
# PCA on scaled features
# Instantiate PCA and fit to Xm_train_scaled
pca_scaled = PCA().fit(Xm_train_scaled)
# Transform Xm_train_scaled
Xm_train_scaled_pca = pca_scaled.transform(Xm_train_scaled)
# Transform Xm_test_scaled
Xm_test_scaled_pca = pca_scaled.transform(Xm_test_scaled)
# Plot the explained variance ratios
plt.subplots(1, 2, figsize = (15, 5))
# Unscaled
plt.subplot(1, 2, 1)
plt.bar(np.arange(1, len(pca.explained_variance_ratio_)+1), pca.explained_variance_ratio_)
plt.xlabel('Principal Component')
plt.ylabel('Explained Variance Ratio')
plt.title('PCA on Unscaled Features')
plt.ylim(top = 0.6) # Equalizing the y-axes
# Scaled
plt.subplot(1, 2, 2)
plt.bar(np.arange(1, len(pca_scaled.explained_variance_ratio_)+1), pca_scaled.explained_variance_ratio_)
plt.xlabel('Principal Component')
plt.ylabel('Explained Variance Ratio')
plt.title('PCA on Scaled Features')
plt.ylim(top = 0.6) # Equalizing the y-axes
plt.tight_layout()
plt.show()
# Examining the variances
var_df = pd.DataFrame(male_df.var()).T
var_df
###Output
_____no_output_____
###Markdown
How much variance is explained by certain numbers of unscaled and scaled principal components? This will help me determine how many principal components to try in my grid searches later.
###Code
# Unscaled
num_components = [131, 51, 41, 31, 21, 16]
for n in num_components:
print(f'Variance explained by {n-1} unscaled principal components: {np.round(np.sum(pca.explained_variance_ratio_[:n-1])*100, 2)}%')
# Scaled
num_components = [131, 51, 41, 31, 21, 16]
for n in num_components:
print(f'Variance explained by {n-1} scaled principal components: {np.round(np.sum(pca_scaled.explained_variance_ratio_[:n-1])*100, 2)}%')
# Cache
cachedir = mkdtemp()
# Pipeline (these values are placeholders)
my_pipeline = Pipeline(steps=[('scaler', StandardScaler()), ('dim_reducer', PCA()), ('model', LogisticRegression())], memory=cachedir)
# Parameter grid for log reg
logreg_param_grid = [
# l1 without PCA
# unscaled and scaled * 9 regularization strengths = 18 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [LogisticRegression(penalty='l1', n_jobs=-1)],
'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# l1 with PCA
# unscaled and scaled * 5 PCAs * 9 regularization strengths = 90 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': [15, 20, 30, 40, 50],
'model': [LogisticRegression(penalty='l1', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# l2 (default) without PCA
# unscaled and scaled * 9 regularization strengths = 18 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [LogisticRegression(solver='lbfgs', n_jobs=-1)],
'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# l2 (default) with PCA
# unscaled and scaled * 5 PCAs * 9 regularization strengths = 90 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': [15, 20, 30, 40, 50],
'model': [LogisticRegression(solver='lbfgs', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}
]
# Instantiate the log reg grid search
logreg_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=logreg_param_grid, cv=5, n_jobs=-1, verbose=5)
# Fit the log reg grid search
fitted_logreg_grid_em = logreg_grid_search.fit(Xm_train, em_train)
# What was the best log reg?
fitted_logreg_grid_em.best_estimator_
print(f"The best log reg's accuracy on the training set: {fitted_logreg_grid_em.score(Xm_train, em_train)*100}%")
print(f"The best log reg's accuracy on the test set: {fitted_logreg_grid_em.score(Xm_test, em_test)*100}%")
# Parameter grid for SVM
svm_param_grid = [
# unscaled and scaled * 9 regularization strengths = 18 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [SVC()], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# unscaled and scaled * 5 PCAs * 9 regularization strengths = 90 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': [15, 20, 30, 40, 50], 'model': [SVC()],
'model__C':[0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}
]
# Instantiate the SVM grid search
svm_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=svm_param_grid, cv=5, n_jobs=-1, verbose=5)
# Fit the SVM grid search
fitted_svm_grid_em = svm_grid_search.fit(Xm_train, em_train)
# What was the best SVM?
fitted_svm_grid_em.best_estimator_
print(f"The best SVM's accuracy on the training set: {fitted_svm_grid_em.score(Xm_train, em_train)*100}%")
print(f"The best SVM's accuracy on the test set: {fitted_svm_grid_em.score(Xm_test, em_test)*100}%")
# Parameter grid for KNN
knn_param_grid = [
# unscaled and scaled * 10 Ks = 20 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [KNeighborsClassifier(n_jobs=-1)], 'model__n_neighbors': np.arange(3, 22, 2)},
# unscaled and scaled * 5 PCAs * 10 Ks = 100 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': [15, 20, 30, 40, 50], 'model': [KNeighborsClassifier(n_jobs=-1)],
'model__n_neighbors': np.arange(3, 22, 2)}
]
# Instantiate the grid search
knn_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=knn_param_grid, cv=5, n_jobs=-1, verbose=5)
# Fit the KNN grid search
fitted_knn_grid_em = knn_grid_search.fit(Xm_train, em_train)
# What was the best KNN model?
fitted_knn_grid_em.best_estimator_
print(f"The best KNN model's accuracy on the training set: {fitted_knn_grid_em.score(Xm_train, em_train)*100}%")
print(f"The best KNN model's accuracy on the test set: {fitted_knn_grid_em.score(Xm_test, em_test)*100}%")
# Parameter grid for random forest (scaling is unnecessary)
rf_param_grid = [
# 5 numbers of estimators * 5 max depths = 25 models
{'scaler': [None], 'dim_reducer': [None], 'model': [RandomForestClassifier(n_jobs=-1)], 'model__n_estimators': np.arange(100, 501, 100),
'model__max_depth': np.arange(5, 26, 5)},
# 5 PCAs * 5 numbers of estimators * 5 max depths = 150 models
{'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': [15, 20, 30, 40, 50], 'model': [RandomForestClassifier(n_jobs=-1)],
'model__n_estimators': np.arange(100, 501, 100), 'model__max_depth': np.arange(5, 26, 5)}
]
# Instantiate the rf grid search
rf_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=rf_param_grid, cv=5, n_jobs=-1, verbose=5)
# Fit the rf grid search
fitted_rf_grid_em = rf_grid_search.fit(Xm_train, em_train)
# What was the best rf?
fitted_rf_grid_em.best_estimator_
print(f"The best random forest's accuracy on the training set: {fitted_rf_grid_em.score(Xm_train, em_train)*100}%")
print(f"The best random forest's accuracy on the test set: {fitted_rf_grid_em.score(Xm_test, em_test)*100}%")
###Output
The best random forest's accuracy on the training set: 100.0%
The best random forest's accuracy on the test set: 33.7962962962963%
###Markdown
___ Building Models for Classifying Emotion for Females
###Code
# Making a new dataframe that contains only female recordings
female_df = df[df['Gender'] == 'female'].reset_index().drop('index', axis=1)
# Splitting the dataframe into features and target
Xf = female_df.iloc[:, :-2]
ef = female_df['Emotion']
# Splitting the data into training and test sets
Xf_train, Xf_test, ef_train, ef_test = train_test_split(Xf, ef, test_size=0.3, stratify=ef, random_state=1)
# Checking the shapes
print(Xf_train.shape)
print(Xf_test.shape)
print(ef_train.shape)
print(ef_test.shape)
###Output
(504, 130)
(216, 130)
(504,)
(216,)
###Markdown
Here is an initial model:
###Code
# Instantiate the model
initial_logreg_ef = LogisticRegression()
# Fit to training set
initial_logreg_ef.fit(Xf_train, ef_train)
# Score on training set
print(f'Model accuracy on training set: {initial_logreg_ef.score(Xf_train, ef_train)*100}%')
# Score on test set
print(f'Model accuracy on test set: {initial_logreg_ef.score(Xf_test, ef_test)*100}%')
# Having initial_logreg_ef make predictions based on the test set features
ef_pred = initial_logreg_ef.predict(Xf_test)
# Building the confusion matrix as a dataframe
emotions = ['angry', 'calm', 'disgusted', 'fearful', 'happy', 'neutral', 'sad', 'surprised']
ef_confusion_df = pd.DataFrame(confusion_matrix(ef_test, ef_pred))
ef_confusion_df.columns = [f'Predicted {emotion}' for emotion in emotions]
ef_confusion_df.index = [f'Actual {emotion}' for emotion in emotions]
ef_confusion_df
# Classification report
print(classification_report(ef_test, ef_pred))
# PCA on unscaled features
# Instantiate PCA and fit to Xf_train
pca = PCA().fit(Xf_train)
# Transform Xf_train
Xf_train_pca = pca.transform(Xf_train)
# Transform Xf_test
Xf_test_pca = pca.transform(Xf_test)
# Standard scaling
# Instantiate the scaler and fit to Xf_train
scaler = StandardScaler().fit(Xf_train)
# Transform Xf_train
Xf_train_scaled = scaler.transform(Xf_train)
# Transform Xf_test
Xf_test_scaled = scaler.transform(Xf_test)
# PCA on scaled features
# Instantiate PCA and fit to Xf_train_scaled
pca_scaled = PCA().fit(Xf_train_scaled)
# Transform Xf_train_scaled
Xf_train_scaled_pca = pca_scaled.transform(Xf_train_scaled)
# Transform Xf_test_scaled
Xf_test_scaled_pca = pca_scaled.transform(Xf_test_scaled)
# Plot the explained variance ratios
plt.subplots(1, 2, figsize = (15, 5))
# Unscaled
plt.subplot(1, 2, 1)
plt.bar(np.arange(1, len(pca.explained_variance_ratio_)+1), pca.explained_variance_ratio_)
plt.xlabel('Principal Component')
plt.ylabel('Explained Variance Ratio')
plt.title('PCA on Unscaled Features')
plt.ylim(top = 0.6) # Equalizing the y-axes
# Scaled
plt.subplot(1, 2, 2)
plt.bar(np.arange(1, len(pca_scaled.explained_variance_ratio_)+1), pca_scaled.explained_variance_ratio_)
plt.xlabel('Principal Component')
plt.ylabel('Explained Variance Ratio')
plt.title('PCA on Scaled Features')
plt.ylim(top = 0.6) # Equalizing the y-axes
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
How much variance is explained by certain numbers of unscaled and scaled principal components? This will help me determine how many principal components to try in my grid searches later.
###Code
# Unscaled
num_components = [131, 51, 41, 31, 21, 16]
for n in num_components:
print(f'Variance explained by {n-1} unscaled principal components: {np.round(np.sum(pca.explained_variance_ratio_[:n-1])*100, 2)}%')
# Scaled
num_components = [131, 51, 41, 31, 21, 16]
for n in num_components:
print(f'Variance explained by {n-1} scaled principal components: {np.round(np.sum(pca_scaled.explained_variance_ratio_[:n-1])*100, 2)}%')
###Output
Variance explained by 130 scaled principal components: 100.0%
Variance explained by 50 scaled principal components: 99.17%
Variance explained by 40 scaled principal components: 98.55%
Variance explained by 30 scaled principal components: 97.42%
Variance explained by 20 scaled principal components: 95.27%
Variance explained by 15 scaled principal components: 93.18%
###Markdown
Like before, I will now do a grid search for each classifier type, with five-fold cross-validation to optimize the hyperparameters.
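Once the four searches below have finished, it can help to compare them side by side. A minimal sketch (assuming the fitted objects in this section are named `fitted_logreg_grid_ef`, `fitted_svm_grid_ef`, `fitted_knn_grid_ef`, and `fitted_rf_grid_ef`):
```
# Summarize each fitted GridSearchCV: mean CV accuracy of its best model and its test accuracy
fitted_grids = {
    "Logistic Regression": fitted_logreg_grid_ef,
    "SVM": fitted_svm_grid_ef,
    "KNN": fitted_knn_grid_ef,
    "Random Forest": fitted_rf_grid_ef,
}
summary = pd.DataFrame({
    "Best CV accuracy": {name: grid.best_score_ for name, grid in fitted_grids.items()},
    "Test accuracy": {name: grid.score(Xf_test, ef_test) for name, grid in fitted_grids.items()},
})
summary
```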
###Code
# Cache
cachedir = mkdtemp()
# Pipeline (these values are placeholders)
my_pipeline = Pipeline(steps=[('scaler', StandardScaler()), ('dim_reducer', PCA()), ('model', LogisticRegression())], memory=cachedir)
# Parameter grid for log reg
logreg_param_grid = [
# l1 without PCA
# unscaled and scaled * 9 regularization strengths = 18 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [LogisticRegression(penalty='l1', n_jobs=-1)],
'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# l1 with PCA
# unscaled and scaled * 5 PCAs * 9 regularization strengths = 90 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': [15, 20, 30, 40, 50],
'model': [LogisticRegression(penalty='l1', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# l2 (default) without PCA
# unscaled and scaled * 9 regularization strengths = 18 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [LogisticRegression(solver='lbfgs', n_jobs=-1)],
'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# l2 (default) with PCA
# unscaled and scaled * 5 PCAs * 9 regularization strengths = 90 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': [15, 20, 30, 40, 50],
'model': [LogisticRegression(solver='lbfgs', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}
]
# Instantiate the log reg grid search
logreg_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=logreg_param_grid, cv=5, n_jobs=-1, verbose=5)
# Fit the log reg grid search
fitted_logreg_grid_ef = logreg_grid_search.fit(Xf_train, ef_train)
# What was the best log reg?
fitted_logreg_grid_ef.best_estimator_
print(f"The best log reg's accuracy on the training set: {fitted_logreg_grid_ef.score(Xf_train, ef_train)*100}%")
print(f"The best log reg's accuracy on the test set: {fitted_logreg_grid_ef.score(Xf_test, ef_test)*100}%")
# Parameter grid for SVM
svm_param_grid = [
# unscaled and scaled * 9 regularization strengths = 18 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [SVC()], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]},
# unscaled and scaled * 5 PCAs * 9 regularization strengths = 90 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': [15, 20, 30, 40, 50], 'model': [SVC()],
'model__C':[0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}
]
# Instantiate the SVM grid search
svm_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=svm_param_grid, cv=5, n_jobs=-1, verbose=5)
# Fit the SVM grid search
fitted_svm_grid_ef = svm_grid_search.fit(Xf_train, ef_train)
# What was the best SVM?
fitted_svm_grid_ef.best_estimator_
print(f"The best SVM's accuracy on the training set: {fitted_svm_grid_ef.score(Xf_train, ef_train)*100}%")
print(f"The best SVM's accuracy on the test set: {fitted_svm_grid_ef.score(Xf_test, ef_test)*100}%")
# Parameter grid for KNN
knn_param_grid = [
# unscaled and scaled * 10 Ks = 20 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [KNeighborsClassifier(n_jobs=-1)], 'model__n_neighbors': np.arange(3, 22, 2)},
# unscaled and scaled * 5 PCAs * 10 Ks = 100 models
{'scaler': [None, StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': [15, 20, 30, 40, 50], 'model': [KNeighborsClassifier(n_jobs=-1)],
'model__n_neighbors': np.arange(3, 22, 2)}
]
# Instantiate the grid search
knn_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=knn_param_grid, cv=5, n_jobs=-1, verbose=5)
# Fit the KNN grid search (using the female training data for this section)
fitted_knn_grid_ef = knn_grid_search.fit(Xf_train, ef_train)
# What was the best KNN model?
fitted_knn_grid_ef.best_estimator_
print(f"The best KNN model's accuracy on the training set: {fitted_knn_grid_ef.score(Xf_train, ef_train)*100}%")
print(f"The best KNN model's accuracy on the test set: {fitted_knn_grid_ef.score(Xf_test, ef_test)*100}%")
# Parameter grid for random forest (scaling is unnecessary)
rf_param_grid = [
# 5 numbers of estimators * 5 max depths = 25 models
{'scaler': [None], 'dim_reducer': [None], 'model': [RandomForestClassifier(n_jobs=-1)], 'model__n_estimators': np.arange(100, 501, 100),
'model__max_depth': np.arange(5, 26, 5)},
# 5 PCAs * 5 numbers of estimators * 5 max depths = 150 models
{'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': [15, 20, 30, 40, 50], 'model': [RandomForestClassifier(n_jobs=-1)],
'model__n_estimators': np.arange(100, 501, 100), 'model__max_depth': np.arange(5, 26, 5)}
]
# Instantiate the rf grid search
rf_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=rf_param_grid, cv=5, n_jobs=-1, verbose=5)
# Fit the rf grid search (using the female training data for this section)
fitted_rf_grid_ef = rf_grid_search.fit(Xf_train, ef_train)
# What was the best rf?
fitted_rf_grid_ef.best_estimator_
print(f"The best random forest's accuracy on the training set: {fitted_rf_grid_ef.score(Xf_train, ef_train)*100}%")
print(f"The best random forest's accuracy on the test set: {fitted_rf_grid_ef.score(Xf_test, ef_test)*100}%")
###Output
The best random forest's accuracy on the training set: 100.0%
The best random forest's accuracy on the test set: 37.96296296296296%
|
student-notebooks/08.00-Ligand-Docking-PyRosetta.ipynb | ###Markdown
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
###Code
NAME = ""
COLLABORATORS = ""
###Output
_____no_output_____
###Markdown
--- *This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).* Ligand Refinement in PyRosetta (a.k.a. High-Resolution Local Docking) Using the `ligand.wts` Scorefunction *Warning*: This notebook uses `pyrosetta.distributed.viewer` code, which runs in `jupyter notebook` and might not run if you're using `jupyterlab`.
###Code
# Notebook setup
import sys
if 'google.colab' in sys.modules:
!pip install pyrosettacolabsetup
import pyrosettacolabsetup
pyrosettacolabsetup.mount_pyrosetta_install()
print ("Notebook is set for PyRosetta use in Colab. Have fun!")
import logging
logging.basicConfig(level=logging.INFO)
import os
import pyrosetta
import pyrosetta.distributed
import pyrosetta.distributed.viewer as viewer
###Output
_____no_output_____
###Markdown
Initialize PyRosetta and setup the input pose:
###Code
params_file = "inputs/TPA.gasteiger.fa.params"
flags = f"""
-extra_res_fa {params_file} # Provide a custom TPA .params file
-ignore_unrecognized_res 1
-mute all
"""
pyrosetta.distributed.init(flags)
pose = pyrosetta.io.pose_from_file("inputs/test_lig.pdb")
###Output
_____no_output_____
###Markdown
Before we perform ligand refinement, let's take a look at the input `.pdb` file using the `pyrosetta.distributed.viewer` macromolecular visualizer:
###Code
chE = pyrosetta.rosetta.core.select.residue_selector.ChainSelector("E")
view = viewer.init(pose)
view.add(viewer.setStyle())
view.add(viewer.setStyle(command=({"hetflag": True}, {"stick": {"colorscheme": "brownCarbon", "radius": 0.2}})))
view.add(viewer.setSurface(residue_selector=chE, opacity=0.7, color='white'))
view.add(viewer.setHydrogenBonds())
view()
###Output
_____no_output_____
###Markdown
****Restart Jupyter Notebook kernel to properly re-initialize PyRosetta****
###Code
# Notebook setup
import sys
if 'google.colab' in sys.modules:
!pip install pyrosettacolabsetup
import pyrosettacolabsetup
pyrosettacolabsetup.mount_pyrosetta_install()
print ("Notebook is set for PyRosetta use in Colab. Have fun!")
import logging
logging.basicConfig(level=logging.INFO)
import os
import pyrosetta
import pyrosetta.distributed
import pyrosetta.distributed.viewer as viewer
###Output
_____no_output_____
###Markdown
The following ligand refinement example was adapted from `~Rosetta/main/source/src/python/PyRosetta/src/demo/D120_Ligand_interface.py`:
###Code
def sample_ligand_interface(pdb_filename,
partners,
ligand_params=[""],
jobs=1,
job_output="ligand_output"):
"""
Performs ligand-protein docking using Rosetta fullatom docking
(DockingHighRes) on the ligand-protein complex in <pdb_filename>
using the relative chain <partners>. If the ligand parameters
(a .params file) are not loaded into PyRosetta by default,
<ligand_params> must supply the list of files including the ligand
parameters. <jobs> trajectories are performed with output
structures named <job_output>_(job#).pdb.
Note: Global docking, a problem solved by the Rosetta DockingProtocol,
requires interface detection and refinement; as with other protocols,
these tasks are split into centroid (interface detection) and
high-resolution (interface refinement) methods. Without a centroid
representation, low-resolution ligand-protein prediction is not
possible and, as such, only the high-resolution ligand-protein
interface refinement is available. If you add a perturbation or
randomization step, the high-resolution stages may fail. A perturbation
step CAN make this a global docking algorithm however the rigid-body
sampling preceding refinement requires extensive sampling to produce
accurate results and this algorithm spends most of its effort in
refinement (which may be useless for the predicted interface).
This script performs ligand-protein interface structure prediction but does NOT
perform global ligand-protein docking. Since there is no generic interface
detection, the input PDB file must have the ligand placed near the interface
that will be refined. If the DockMCMProtocol is applied to a pose
without placement near the interface, then the refinement may:
-waste steps sampling the wrong interface
-fail by predicting an incorrect interface very far from the true interface
-fail by separating the ligand from the protein (usually due to a clash)
DockMCMProtocol does not require an independent randomization or perturbation
step to "seed" its prediction.
Additional refinement steps may increase the accuracy of the predicted
conformation (see refinement.py). Drastic moves (large conformational changes)
should be avoided; if they precede the protocol, the problems above may occur,
if they succeed the protocol, the protocol results may be lost.
"""
# Declare working directory and output directory
working_dir = os.getcwd()
output_dir = "outputs"
if not os.path.exists(output_dir):
os.mkdir(output_dir)
# Initialize PyRosetta
pyrosetta.init()
# Create an empty pose from the desired PDB file
pose = pyrosetta.rosetta.core.pose.Pose()
# If the params list has contents, load .params files
# Note: this method of adding ligands to the ResidueTypeSet is unnecessary
# if you call pyrosetta.init("-extra_res_fa {}".format(ligand_params))
if len(ligand_params) != 0 and ligand_params[0] != "":
ligand_params = pyrosetta.Vector1(ligand_params)
res_set = pose.conformation().modifiable_residue_type_set_for_conf()
res_set.read_files_for_base_residue_types(ligand_params)
pose.conformation().reset_residue_type_set_for_conf(res_set)
# Load pdb_filename into pose
pyrosetta.io.pose_from_file(pose, pdb_filename)
# Setup the docking FoldTree
# the method setup_foldtree takes an input pose and sets its
# FoldTree to have jump 1 represent the relation between the two docking
# partners, the jump points are the residues closest to the centers of
# geometry for each partner with a cutpoint at the end of the chain,
# the second argument is a string specifying the relative chain orientation
# such as "A_B" of "LH_A", ONLY TWO BODY DOCKING is supported and the
# partners MUST have different chain IDs and be in the same pose (the
# same PDB), additional chains can be grouped with one of the partners,
# the "_" character specifies which bodies are separated
# the third argument...is currently unsupported but must be set (it is
# supposed to specify which jumps are movable, to support multibody
# docking...but Rosetta doesn't currently)
# the FoldTrees setup by this method are for TWO BODY docking ONLY!
dock_jump = 1 # jump number 1 is the inter-body jump
pyrosetta.rosetta.protocols.docking.setup_foldtree(pose,
partners,
pyrosetta.Vector1([dock_jump]))
# Create ScoreFunctions for centroid and fullatom docking
scorefxn = pyrosetta.create_score_function("ligand.wts")
# Setup the high resolution (fullatom) docking protocol using DockMCMProtocol.
docking = pyrosetta.rosetta.protocols.docking.DockMCMProtocol()
# Many of its options and settings can be set using the setter methods.
docking.set_scorefxn(scorefxn)
# Change directory temporarily for output
os.chdir(output_dir)
# Setup the PyJobDistributor
jd = pyrosetta.toolbox.py_jobdistributor.PyJobDistributor(job_output,
jobs, scorefxn,
compress=False)
# Set the native pose so that the output scorefile contains the pose rmsd metric
jd.native_pose = pose
# Optional: setup a PyMOLObserver
# pyrosetta.rosetta.protocols.moves.AddPyMOLObserver(test_pose, True)
# Perform protein-ligand docking
# counter = 0 # for pretty output to PyMOLObserver
while not jd.job_complete:
test_pose = pose.clone() # Reset test pose to original structure
# counter += 1 # Change the pose name, for pretty output to PyMOLObserver
# test_pose.pdb_info().name(job_output + '_' + str(counter))
# Perform docking and output to PyMOL:
docking.apply(test_pose)
# Write the decoy structure to disk:
jd.output_decoy(test_pose)
os.chdir(working_dir)
###Output
_____no_output_____
###Markdown
Let's test out the `sample_ligand_interface` function (takes ~2 minutes with `jobs=1`, which means nstruct is set to 1 in the `PyJobDistributor`):
###Code
if not os.getenv("DEBUG"):
sample_ligand_interface("inputs/test_lig.pdb", "E_X",
ligand_params=["inputs/TPA.gasteiger.fa.params"],
jobs=1,
job_output="test_lig")
###Output
_____no_output_____
###Markdown
*Interpreting Results:* The `PyJobDistributor` will output the lowest scoring pose for each trajectory (as a `.pdb` file), recording the score in `outputs/.fasc`. Generally, the decoy generated with the lowest score contains the best prediction for the protein-ligand conformation. PDB files produced from docking will contain both docking partners in their predicted conformation. When inspecting these PDB files (or the `PyMOLObserver` output) be aware that PyMOL can introduce or predict bonds that do not exist, particularly for close atoms. This rarely occurs when using the PyMOLMover.keep_history feature (since PyRosetta will sample some conformation space that has clashes). The `PyMOLObserver` will output a series of structures directly produced by the DockingProtocol. Unfortunately, this may include intermediate structures that do not yield any insight into the protocol performance. A LARGE number of structures are output to PyMOL and your machine may have difficulty loading all of these structures. If this occurs, try changing the `PyMOLObserver` keep_history to False or running the protocol without the `PyMOLObserver`. Interface structure prediction is useful for considering what physical properties are important in the binding event and what conformational changes occur. Once experienced using PyRosetta, you can easily write scripts to investigate the Rosetta score terms and structural characteristics. There is no general interpretation of ligand-binding results. Although Rosetta score does not translate directly to physical meaning (it is not physical energy), splitting the docked partners and comparing the scores (after packing or refinement) can indicate the strength of the bonding interaction.
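As a rough, illustrative sketch of that last point (our own hedged example, not part of the original demo), one could compare the score of the complex with the summed scores of its separated chains; `pose` here is the complex loaded earlier, and no repacking is done, so treat the number only as a qualitative indicator:
###Code
# Hedged sketch (not from the original demo): crude estimate of the interaction score.
scorefxn = pyrosetta.create_score_function("ligand.wts")
complex_score = scorefxn(pose)  # score of the full protein-ligand complex
separated_score = sum(scorefxn(p) for p in pose.split_by_chain())  # each chain scored on its own
print("Approximate interaction score (REU):", complex_score - separated_score)
###Output
_____no_output_____
###Markdown
 ****Restart Jupyter Notebook kernel to properly re-initialize PyRosetta****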
###Code
import sys
# Notebook setup
if 'google.colab' in sys.modules:
!pip install pyrosettacolabsetup
import pyrosettacolabsetup
pyrosettacolabsetup.mount_pyrosetta_install()
print ("Notebook is set for PyRosetta use in Colab. Have fun!")
import logging
logging.basicConfig(level=logging.INFO)
import os
import pyrosetta
import pyrosetta.distributed
import pyrosetta.distributed.viewer as viewer
params_file = "inputs/TPA.gasteiger.fa.params"
flags = f"""
-extra_res_fa {params_file}
-ignore_unrecognized_res 1
-mute all
"""
pyrosetta.distributed.init(flags)
pose = pyrosetta.io.pose_from_file("expected_outputs/test_lig_0.pdb")
###Output
_____no_output_____
###Markdown
After ligand refinement has completed, let's take a look at the output `.pdb` file using the `py3Dmol` module:
###Code
chE = pyrosetta.rosetta.core.select.residue_selector.ChainSelector("E")
view = viewer.init(pose)
view.add(viewer.setStyle())
view.add(viewer.setStyle(command=({"hetflag": True}, {"stick": {"colorscheme": "brownCarbon", "radius": 0.2}})))
view.add(viewer.setSurface(residue_selector=chE, opacity=0.7, color='white'))
view.add(viewer.setHydrogenBonds())
view()
###Output
_____no_output_____
###Markdown
*Coding challenge:* Below, write an alternate version of the function `sample_ligand_interface` called `ligand_refinement_from_command_line.py` with the following modifications: 1. Load ligands into the Rosetta database using the `pyrosetta.init()` method rather than by modification of the `ResidueTypeSet` database. 2. Change the scorefunction to `talaris2014`. Run it from the command line (Note: the `optparse` module has already been added for you). *Note*: Notice that the first line of the following cell uses the ipython magic command `%%file`, which writes the remainder of the cell contents to the file `outputs/ligand_refinement_from_command_line.py`:
###Code
%%file outputs/ligand_refinement_from_command_line.py
import optparse
import os
import pyrosetta
def sample_ligand_interface(pdb_filename,
partners,
ligand_params=[""],
jobs=1,
job_output="ligand_output"):
"""
Performs ligand-protein docking using Rosetta fullatom docking
(DockingHighRes) on the ligand-protein complex in <pdb_filename>
using the relative chain <partners>. If the ligand parameters
(a .params file) are not loaded into PyRosetta by default,
<ligand_params> must supply the list of files including the ligand
parameters. <jobs> trajectories are performed with output
structures named <job_output>_(job#).pdb.
Note: Global docking, a problem solved by the Rosetta DockingProtocol,
requires interface detection and refinement; as with other protocols,
these tasks are split into centroid (interface detection) and
high-resolution (interface refinement) methods. Without a centroid
representation, low-resolution ligand-protein prediction is not
possible and, as such, only the high-resolution ligand-protein
interface refinement is available. If you add a perturbation or
randomization step, the high-resolution stages may fail. A perturbation
step CAN make this a global docking algorithm however the rigid-body
sampling preceding refinement requires extensive sampling to produce
accurate results and this algorithm spends most of its effort in
refinement (which may be useless for the predicted interface).
This script performs ligand-protein interface structure prediction but does NOT
perform global ligand-protein docking. Since there is no generic interface
detection, the input PDB file must have the ligand placed near the interface
that will be refined. If the DockMCMProtocol is applied to a pose
without placement near the interface, then the refinement may:
-waste steps sampling the wrong interface
-fail by predicting an incorrect interface very far from the true interface
-fail by separating the ligand from the protein (usually due to a clash)
DockMCMProtocol does not require an independent randomization or perturbation
step to "seed" its prediction.
Additional refinement steps may increase the accuracy of the predicted
conformation (see refinement.py). Drastic moves (large conformational changes)
should be avoided; if they precede the protocol, the problems above may occur,
if they succeed the protocol, the protocol results may be lost.
"""
# Declare working directory and output directory
working_dir = os.getcwd()
output_dir = "outputs"
if not os.path.exists(output_dir):
os.mkdir(output_dir)
# Initialize PyRosetta
pyrosetta.init()
# Create an empty pose from the desired PDB file
pose = pyrosetta.rosetta.core.pose.Pose()
# If the params list has contents, load .params files
# Note: this method of adding ligands to the ResidueTypeSet is unnecessary
# if you call pyrosetta.init("-extra_res_fa {}".format(ligand_params))
if len(ligand_params) != 0 and ligand_params[0] != "":
ligand_params = pyrosetta.Vector1(ligand_params)
res_set = pose.conformation().modifiable_residue_type_set_for_conf()
res_set.read_files_for_base_residue_types(ligand_params)
pose.conformation().reset_residue_type_set_for_conf(res_set)
# Load pdb_filename into pose
pyrosetta.io.pose_from_file(pose, pdb_filename)
# Setup the docking FoldTree
# the method setup_foldtree takes an input pose and sets its
# FoldTree to have jump 1 represent the relation between the two docking
# partners, the jump points are the residues closest to the centers of
# geometry for each partner with a cutpoint at the end of the chain,
# the second argument is a string specifying the relative chain orientation
# such as "A_B" of "LH_A", ONLY TWO BODY DOCKING is supported and the
# partners MUST have different chain IDs and be in the same pose (the
# same PDB), additional chains can be grouped with one of the partners,
# the "_" character specifies which bodies are separated
# the third argument...is currently unsupported but must be set (it is
# supposed to specify which jumps are movable, to support multibody
# docking...but Rosetta doesn't currently)
# the FoldTrees setup by this method are for TWO BODY docking ONLY!
dock_jump = 1 # jump number 1 is the inter-body jump
pyrosetta.rosetta.protocols.docking.setup_foldtree(pose,
partners,
pyrosetta.Vector1([dock_jump]))
# Create a copy of the pose for testing
test_pose = pose.clone()
# Create ScoreFunctions for centroid and fullatom docking
scorefxn = pyrosetta.create_score_function("ligand")
# Setup the high resolution (fullatom) docking protocol using DockMCMProtocol.
docking = pyrosetta.rosetta.protocols.docking.DockMCMProtocol()
# Many of its options and settings can be set using the setter methods.
docking.set_scorefxn(scorefxn)
# Change directory temporarily for output
os.chdir(output_dir)
# Setup the PyJobDistributor
jd = pyrosetta.toolbox.py_jobdistributor.PyJobDistributor(job_output,
jobs, scorefxn,
compress=False)
# Set the native pose so that the output scorefile contains the pose rmsd metric
jd.native_pose = pose
# Optional: setup a PyMOLObserver
# pyrosetta.rosetta.protocols.moves.AddPyMOLObserver(test_pose, True)
# Perform protein-ligand docking
# counter = 0 # for pretty output to PyMOLObserver
while not jd.job_complete:
test_pose = pose.clone() # Reset test pose to original structure
# counter += 1 # Change the pose name, for pretty output to PyMOLObserver
# test_pose.pdb_info().name(job_output + '_' + str(counter))
docking.apply(test_pose) # Perform docking and output to PyMOL
# Write the decoy structure to disk
jd.output_decoy(test_pose)
os.chdir(working_dir)
if __name__ == "__main__":
# Declare parser object for managing input options
parser = optparse.OptionParser()
parser.add_option("--pdb_filename",
dest="pdb_filename",
help="The PDB file containing the ligand and protein to dock.")
parser.add_option("--partners",
dest="partners",
default = "A_X",
help="The relative chain partners for docking.")
parser.add_option("--ligand_params",
dest="ligand_params",
help="The ligand residue parameter file.")
parser.add_option("--jobs",
dest="jobs",
default="1",
help="The number of jobs (trajectories) to perform.")
parser.add_option("--job_output",
dest="job_output",
default = "ligand_output",
help="The name preceding all output, output PDB files and scorefile.")
(options, args) = parser.parse_args()
# Catch input errors
if not options.pdb_filename:
parser.error("pdb_filename not given!")
if not options.ligand_params:
parser.error("ligand_params not given!")
# Run ligand refinement protocol
sample_ligand_interface(pdb_filename=options.pdb_filename,
partners=options.partners,
ligand_params=options.ligand_params.split(","),
jobs=int(options.jobs),
job_output=options.job_output)
###Output
_____no_output_____
###Markdown
Run `outputs/ligand_refinement_from_command_line.py` from the command line within this Jupyter Notebook!
###Code
pdb_filename = "inputs/test_lig.pdb"
params_file = "inputs/TPA.gasteiger.fa.params"
if not os.getenv("DEBUG"):
%run expected_outputs/ligand_refinement_from_command_line.py \
--pdb_filename {pdb_filename} \
--ligand_params {params_file} \
--partners E_X \
--jobs 1 \
--job_output test_lig_command_line
###Output
_____no_output_____
###Markdown
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All). Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
###Code
NAME = ""
COLLABORATORS = ""
###Output
_____no_output_____
###Markdown
--- *This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks); content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).* Ligand Refinement in PyRosetta (a.k.a. High-Resolution Local Docking) Using the `ligand.wts` Scorefunction *Warning*: This notebook uses `pyrosetta.distributed.viewer` code, which runs in `jupyter notebook` and might not run if you're using `jupyterlab`.
###Code
# Notebook setup
import sys
if 'google.colab' in sys.modules:
!pip install pyrosettacolabsetup
import pyrosettacolabsetup
pyrosettacolabsetup.setup()
print ("Notebook is set for PyRosetta use in Colab. Have fun!")
import logging
logging.basicConfig(level=logging.INFO)
import os
import pyrosetta
import pyrosetta.distributed
import pyrosetta.distributed.viewer as viewer
###Output
_____no_output_____
###Markdown
Initialize PyRosetta and setup the input pose:
###Code
params_file = "inputs/TPA.gasteiger.fa.params"
flags = f"""
-extra_res_fa {params_file} # Provide a custom TPA .params file
-ignore_unrecognized_res 1
-mute all
"""
pyrosetta.distributed.init(flags)
pose = pyrosetta.io.pose_from_file("inputs/test_lig.pdb")
###Output
_____no_output_____
###Markdown
Before we perform ligand refinement, let's take a look at the input `.pdb` file using the `pyrosetta.distributed.viewer` macromolecular visualizer:
###Code
chE = pyrosetta.rosetta.core.select.residue_selector.ChainSelector("E")
view = viewer.init(pose)
view.add(viewer.setStyle())
view.add(viewer.setStyle(command=({"hetflag": True}, {"stick": {"colorscheme": "brownCarbon", "radius": 0.2}})))
view.add(viewer.setSurface(residue_selector=chE, opacity=0.7, color='white'))
view.add(viewer.setHydrogenBonds())
view()
###Output
_____no_output_____
###Markdown
****Restart Jupyter Notebook kernel to properly re-initialize PyRosetta****
###Code
# Notebook setup
import sys
if 'google.colab' in sys.modules:
!pip install pyrosettacolabsetup
import pyrosettacolabsetup
pyrosettacolabsetup.setup()
print ("Notebook is set for PyRosetta use in Colab. Have fun!")
import logging
logging.basicConfig(level=logging.INFO)
import os
import pyrosetta
import pyrosetta.distributed
import pyrosetta.distributed.viewer as viewer
###Output
_____no_output_____
###Markdown
The following ligand refinement example was adapted from `~Rosetta/main/source/src/python/PyRosetta/src/demo/D120_Ligand_interface.py`:
###Code
def sample_ligand_interface(pdb_filename,
partners,
ligand_params=[""],
jobs=1,
job_output="ligand_output"):
"""
Performs ligand-protein docking using Rosetta fullatom docking
(DockingHighRes) on the ligand-protein complex in <pdb_filename>
using the relative chain <partners>. If the ligand parameters
(a .params file) are not loaded into PyRosetta by default,
<ligand_params> must supply the list of files including the ligand
parameters. <jobs> trajectories are performed with output
structures named <job_output>_(job#).pdb.
Note: Global docking, a problem solved by the Rosetta DockingProtocol,
requires interface detection and refinement; as with other protocols,
these tasks are split into centroid (interface detection) and
high-resolution (interface refinement) methods. Without a centroid
representation, low-resolution ligand-protein prediction is not
possible and, as such, only the high-resolution ligand-protein
interface refinement is available. If you add a perturbation or
randomization step, the high-resolution stages may fail. A perturbation
step CAN make this a global docking algorithm however the rigid-body
sampling preceding refinement requires extensive sampling to produce
accurate results and this algorithm spends most of its effort in
refinement (which may be useless for the predicted interface).
This script performs ligand-protein interface structure prediction but does NOT
perform global ligand-protein docking. Since there is no generic interface
detection, the input PDB file must have the ligand placed near the interface
that will be refined. If the DockMCMProtocol is applied to a pose
without placement near the interface, then the refinement may:
-waste steps sampling the wrong interface
-fail by predicting an incorrect interface very far from the true interface
-fail by separating the ligand from the protein (usually due to a clash)
DockMCMProtocol does not require an independent randomization or perturbation
step to "seed" its prediction.
Additional refinement steps may increase the accuracy of the predicted
conformation (see refinement.py). Drastic moves (large conformational changes)
should be avoided; if they precede the protocol, the problems above may occur,
if they succeed the protocol, the protocol results may be lost.
"""
# Declare working directory and output directory
working_dir = os.getcwd()
output_dir = "outputs"
if not os.path.exists(output_dir):
os.mkdir(output_dir)
# Initialize PyRosetta
pyrosetta.init()
# Create an empty pose from the desired PDB file
pose = pyrosetta.rosetta.core.pose.Pose()
# If the params list has contents, load .params files
# Note: this method of adding ligands to the ResidueTypeSet is unnecessary
# if you call pyrosetta.init("-extra_res_fa {}".format(ligand_params))
if len(ligand_params) != 0 and ligand_params[0] != "":
ligand_params = pyrosetta.Vector1(ligand_params)
res_set = pose.conformation().modifiable_residue_type_set_for_conf()
res_set.read_files_for_base_residue_types(ligand_params)
pose.conformation().reset_residue_type_set_for_conf(res_set)
# Load pdb_filename into pose
pyrosetta.io.pose_from_file(pose, pdb_filename)
# Setup the docking FoldTree
# the method setup_foldtree takes an input pose and sets its
# FoldTree to have jump 1 represent the relation between the two docking
# partners, the jump points are the residues closest to the centers of
# geometry for each partner with a cutpoint at the end of the chain,
# the second argument is a string specifying the relative chain orientation
# such as "A_B" of "LH_A", ONLY TWO BODY DOCKING is supported and the
# partners MUST have different chain IDs and be in the same pose (the
# same PDB), additional chains can be grouped with one of the partners,
# the "_" character specifies which bodies are separated
# the third argument...is currently unsupported but must be set (it is
# supposed to specify which jumps are movable, to support multibody
# docking...but Rosetta doesn't currently)
# the FoldTrees setup by this method are for TWO BODY docking ONLY!
dock_jump = 1 # jump number 1 is the inter-body jump
pyrosetta.rosetta.protocols.docking.setup_foldtree(pose,
partners,
pyrosetta.Vector1([dock_jump]))
# Create ScoreFunctions for centroid and fullatom docking
scorefxn = pyrosetta.create_score_function("ligand.wts")
# Setup the high resolution (fullatom) docking protocol using DockMCMProtocol.
docking = pyrosetta.rosetta.protocols.docking.DockMCMProtocol()
# Many of its options and settings can be set using the setter methods.
docking.set_scorefxn(scorefxn)
# Change directory temporarily for output
os.chdir(output_dir)
# Setup the PyJobDistributor
jd = pyrosetta.toolbox.py_jobdistributor.PyJobDistributor(job_output,
jobs, scorefxn,
compress=False)
# Set the native pose so that the output scorefile contains the pose rmsd metric
jd.native_pose = pose
# Optional: setup a PyMOLObserver
# pyrosetta.rosetta.protocols.moves.AddPyMOLObserver(test_pose, True)
# Perform protein-ligand docking
# counter = 0 # for pretty output to PyMOLObserver
while not jd.job_complete:
test_pose = pose.clone() # Reset test pose to original structure
# counter += 1 # Change the pose name, for pretty output to PyMOLObserver
# test_pose.pdb_info().name(job_output + '_' + str(counter))
# Perform docking and output to PyMOL:
docking.apply(test_pose)
# Write the decoy structure to disk:
jd.output_decoy(test_pose)
os.chdir(working_dir)
###Output
_____no_output_____
###Markdown
Let's test out the `sample_ligand_interface` function (takes ~2 minutes with `jobs=1`, which means nstruct is set to 1 in the `PyJobDistributor`):
###Code
if not os.getenv("DEBUG"):
sample_ligand_interface("inputs/test_lig.pdb", "E_X",
ligand_params=["inputs/TPA.gasteiger.fa.params"],
jobs=1,
job_output="test_lig")
###Output
_____no_output_____
###Markdown
*Interpreting Results:* The `PyJobDistributor` will output the lowest scoring pose for each trajectory (as a `.pdb` file), recording the score in `outputs/.fasc`. Generally, the decoy generated with the lowest score contains the best prediction for the protein-ligand conformation. PDB files produced from docking will contain both docking partners in their predicted conformation. When inspecting these PDB files (or the `PyMOLObserver` output) be aware that PyMOL can introduce or predict bonds that do not exist, particularly for close atoms. This rarely occurs when using the PyMOLMover.keep_history feature (since PyRosetta will sample some conformation space that has clashes). The `PyMOLObserver` will output a series of structures directly produced by the DockingProtocol. Unfortunately, this may include intermediate structures that do not yield any insight into the protocol performance. A LARGE number of structures are output to PyMOL and your machine may have difficulty loading all of these structures. If this occurs, try changing the `PyMOLObserver` keep_history to False or running the protocol without the `PyMOLObserver`. Interface structure prediction is useful for considering what physical properties are important in the binding event and what conformational changes occur. Once experienced using PyRosetta, you can easily write scripts to investigate the Rosetta score terms and structural characteristics. There is no general interpretation of ligand-binding results. Although Rosetta score does not translate directly to physical meaning (it is not physical energy), splitting the docked partners and comparing the scores (after packing or refinement) can indicate the strength of the bonding interaction. ****Restart Jupyter Notebook kernel to properly re-initialize PyRosetta****
###Code
import sys
# Notebook setup
if 'google.colab' in sys.modules:
!pip install pyrosettacolabsetup
import pyrosettacolabsetup
pyrosettacolabsetup.setup()
print ("Notebook is set for PyRosetta use in Colab. Have fun!")
import logging
logging.basicConfig(level=logging.INFO)
import os
import pyrosetta
import pyrosetta.distributed
import pyrosetta.distributed.viewer as viewer
params_file = "inputs/TPA.gasteiger.fa.params"
flags = f"""
-extra_res_fa {params_file}
-ignore_unrecognized_res 1
-mute all
"""
pyrosetta.distributed.init(flags)
pose = pyrosetta.io.pose_from_file("expected_outputs/test_lig_0.pdb")
###Output
_____no_output_____
###Markdown
After ligand refinement has completed, let's take a look at the output `.pdb` file using the `py3Dmol` module:
###Code
chE = pyrosetta.rosetta.core.select.residue_selector.ChainSelector("E")
view = viewer.init(pose)
view.add(viewer.setStyle())
view.add(viewer.setStyle(command=({"hetflag": True}, {"stick": {"colorscheme": "brownCarbon", "radius": 0.2}})))
view.add(viewer.setSurface(residue_selector=chE, opacity=0.7, color='white'))
view.add(viewer.setHydrogenBonds())
view()
###Output
_____no_output_____
###Markdown
*Coding challenge:* Below, write an alternate version of the function `sample_ligand_interface` called `ligand_refinement_from_command_line.py` with the following modifications: 1. Load ligands into the Rosetta database using the `pyrosetta.init()` method rather than by modification of the `ResidueTypeSet` database. 2. Change the scorefunction to `talaris2014`. Run it from the command line (Note: the `optparse` module has already been added for you). *Note*: Notice that the first line of the following cell uses the ipython magic command `%%file`, which writes the remainder of the cell contents to the file `outputs/ligand_refinement_from_command_line.py`:
###Code
%%file outputs/ligand_refinement_from_command_line.py
import optparse
import os
import pyrosetta
def sample_ligand_interface(pdb_filename,
partners,
ligand_params=[""],
jobs=1,
job_output="ligand_output"):
"""
Performs ligand-protein docking using Rosetta fullatom docking
(DockingHighRes) on the ligand-protein complex in <pdb_filename>
using the relative chain <partners>. If the ligand parameters
(a .params file) are not loaded into PyRosetta by default,
<ligand_params> must supply the list of files including the ligand
parameters. <jobs> trajectories are performed with output
structures named <job_output>_(job#).pdb.
Note: Global docking, a problem solved by the Rosetta DockingProtocol,
requires interface detection and refinement; as with other protocols,
these tasks are split into centroid (interface detection) and
high-resolution (interface refinement) methods. Without a centroid
representation, low-resolution ligand-protein prediction is not
possible and, as such, only the high-resolution ligand-protein
interface refinement is available. If you add a perturbation or
randomization step, the high-resolution stages may fail. A perturbation
step CAN make this a global docking algorithm however the rigid-body
sampling preceding refinement requires extensive sampling to produce
accurate results and this algorithm spends most of its effort in
refinement (which may be useless for the predicted interface).
This script performs ligand-protein interface structure prediction but does NOT
perform global ligand-protein docking. Since there is no generic interface
detection, the input PDB file must have the ligand placed near the interface
that will be refined. If the DockMCMProtocol is applied to a pose
without placement near the interface, then the refinement may:
-waste steps sampling the wrong interface
-fail by predicting an incorrect interface very far from the true interface
-fail by separating the ligand from the protein (usually due to a clash)
DockMCMProtocol does not require an independent randomization or perturbation
step to "seed" its prediction.
Additional refinement steps may increase the accuracy of the predicted
conformation (see refinement.py). Drastic moves (large conformational changes)
should be avoided; if they precede the protocol, the problems above may occur,
if they succeed the protocol, the protocol results may be lost.
"""
# Declare working directory and output directory
working_dir = os.getcwd()
output_dir = "outputs"
if not os.path.exists(output_dir):
os.mkdir(output_dir)
# Initialize PyRosetta
pyrosetta.init()
# Create an empty pose from the desired PDB file
pose = pyrosetta.rosetta.core.pose.Pose()
# If the params list has contents, load .params files
# Note: this method of adding ligands to the ResidueTypeSet is unnecessary
# if you call pyrosetta.init("-extra_res_fa {}".format(ligand_params))
if len(ligand_params) != 0 and ligand_params[0] != "":
ligand_params = pyrosetta.Vector1(ligand_params)
res_set = pose.conformation().modifiable_residue_type_set_for_conf()
res_set.read_files_for_base_residue_types(ligand_params)
pose.conformation().reset_residue_type_set_for_conf(res_set)
# Load pdb_filename into pose
pyrosetta.io.pose_from_file(pose, pdb_filename)
# Setup the docking FoldTree
# the method setup_foldtree takes an input pose and sets its
# FoldTree to have jump 1 represent the relation between the two docking
# partners, the jump points are the residues closest to the centers of
# geometry for each partner with a cutpoint at the end of the chain,
# the second argument is a string specifying the relative chain orientation
# such as "A_B" of "LH_A", ONLY TWO BODY DOCKING is supported and the
# partners MUST have different chain IDs and be in the same pose (the
# same PDB), additional chains can be grouped with one of the partners,
# the "_" character specifies which bodies are separated
# the third argument...is currently unsupported but must be set (it is
# supposed to specify which jumps are movable, to support multibody
# docking...but Rosetta doesn't currently)
# the FoldTrees setup by this method are for TWO BODY docking ONLY!
dock_jump = 1 # jump number 1 is the inter-body jump
pyrosetta.rosetta.protocols.docking.setup_foldtree(pose,
partners,
pyrosetta.Vector1([dock_jump]))
# Create a copy of the pose for testing
test_pose = pose.clone()
# Create ScoreFunctions for centroid and fullatom docking
scorefxn = pyrosetta.create_score_function("ligand")
# Setup the high resolution (fullatom) docking protocol using DockMCMProtocol.
docking = pyrosetta.rosetta.protocols.docking.DockMCMProtocol()
# Many of its options and settings can be set using the setter methods.
docking.set_scorefxn(scorefxn)
# Change directory temporarily for output
os.chdir(output_dir)
# Setup the PyJobDistributor
jd = pyrosetta.toolbox.py_jobdistributor.PyJobDistributor(job_output,
jobs, scorefxn,
compress=False)
# Set the native pose so that the output scorefile contains the pose rmsd metric
jd.native_pose = pose
# Optional: setup a PyMOLObserver
# pyrosetta.rosetta.protocols.moves.AddPyMOLObserver(test_pose, True)
# Perform protein-ligand docking
# counter = 0 # for pretty output to PyMOLObserver
while not jd.job_complete:
test_pose = pose.clone() # Reset test pose to original structure
# counter += 1 # Change the pose name, for pretty output to PyMOLObserver
# test_pose.pdb_info().name(job_output + '_' + str(counter))
docking.apply(test_pose) # Perform docking and output to PyMOL
# Write the decoy structure to disk
jd.output_decoy(test_pose)
os.chdir(working_dir)
if __name__ == "__main__":
# Declare parser object for managing input options
parser = optparse.OptionParser()
parser.add_option("--pdb_filename",
dest="pdb_filename",
help="The PDB file containing the ligand and protein to dock.")
parser.add_option("--partners",
dest="partners",
default = "A_X",
help="The relative chain partners for docking.")
parser.add_option("--ligand_params",
dest="ligand_params",
help="The ligand residue parameter file.")
parser.add_option("--jobs",
dest="jobs",
default="1",
help="The number of jobs (trajectories) to perform.")
parser.add_option("--job_output",
dest="job_output",
default = "ligand_output",
help="The name preceding all output, output PDB files and scorefile.")
(options, args) = parser.parse_args()
# Catch input errors
if not options.pdb_filename:
parser.error("pdb_filename not given!")
if not options.ligand_params:
parser.error("ligand_params not given!")
# Run ligand refinement protocol
sample_ligand_interface(pdb_filename=options.pdb_filename,
partners=options.partners,
ligand_params=options.ligand_params.split(","),
jobs=int(options.jobs),
job_output=options.job_output)
###Output
_____no_output_____
###Markdown
Run `outputs/ligand_refinement_from_command_line.py` from the command line within this Jupyter Notebook!
###Code
pdb_filename = "inputs/test_lig.pdb"
params_file = "inputs/TPA.gasteiger.fa.params"
if not os.getenv("DEBUG"):
%run expected_outputs/ligand_refinement_from_command_line.py \
--pdb_filename {pdb_filename} \
--ligand_params {params_file} \
--partners E_X \
--jobs 1 \
--job_output test_lig_command_line
###Output
_____no_output_____ |
Docking_0612.ipynb | ###Markdown
Set up the environment
###Code
!nvidia-smi
#!pip install pymatgen==2020.12.31
!pip install pymatgen==2019.11.11
!pip install --pre graphdot
!pip install gdown
%matplotlib inline
import io
import sys
sys.path.append('/usr/local/lib/python3.6/site-packages/')
import os
import urllib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import graphdot
from graphdot import Graph
from graphdot.graph.adjacency import AtomicAdjacency
from graphdot.graph.reorder import rcm
from graphdot.kernel.marginalized import MarginalizedGraphKernel # https://graphdot.readthedocs.io/en/latest/apidoc/graphdot.kernel.marginalized.html
from graphdot.kernel.marginalized.starting_probability import Uniform
from graphdot.model.gaussian_process import (
GaussianProcessRegressor,
LowRankApproximateGPR
)
from graphdot.kernel.fix import Normalization
import graphdot.microkernel as uX
import ase.io
# for getting all file names into a list under a directory
from os import listdir
# for getting file names that match certain pattern
import glob
import time
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
#cd gdrive/MyDrive/Google\ Colab/Covid-Data
%cd gdrive/MyDrive/Covid-Data/
!pwd
!mkdir /content/pkls
###Output
_____no_output_____
###Markdown
load the data
###Code
files = ['uncharged_NSP15_6W01_A_3_H.Orderable_zinc_db_enaHLL.2col.csv.1.xz']
dataset = pd.read_pickle(files[0]) # length of each csv file is 100000
target = 'energy'
batch_size = 1000
batch_num_train = 0.8*len(dataset)//batch_size # batch number of training
num_test = 0.2*len(dataset)
train_data = dataset.iloc[:int(batch_num_train)*batch_size]
test_data = dataset.iloc[int(batch_num_train)*batch_size:len(dataset)]
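# Illustrative arithmetic (assuming the stated file length of 100000 rows):
# batch_num_train = 0.8*100000 // 1000 = 80, so the first 80000 rows form the
# training set and the remaining 20000 rows form the test set.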
#print(len(train_data))
#print(len(test_data))
gpr = GaussianProcessRegressor(
# kernel is the covariance function of the gaussian process (GP)
kernel=Normalization( # kernel equals to normalization -> normalizes a kernel using the cosine of angle formula, k_normalized(x,y) = k(x,y)/sqrt(k(x,x)*k(y,y))
# graphdot.kernel.fix.Normalization(kernel), set kernel as marginalized graph kernel, which is used to calculate the similarity between 2 graphs
# implement the random walk-based graph similarity kernel as Kashima, H., Tsuda, K., & Inokuchi, A. (2003). Marginalized kernels between labeled graphs. ICML
MarginalizedGraphKernel(
# node_kernel - A kernel that computes the similarity between individual nodes
# uX - graphdot.microkernel - microkernels are positive-semidefinite functions between individual nodes and edges of graphs
node_kernel=uX.Additive( # addition of kernel matrices: sum of k_a(X_a, Y_a) over all features a
# uX.Constant - a kernel that returns a constant value, always multiplied with other microkernels as an adjustable weight
# c, the first input arg., is set to 0.5; (0.01, 10) are the lower and upper bounds within which c is allowed to vary during hyperparameter optimization
# uX.KroneckerDelta - a kronecker delta returns 1 when two features are equal and return h (the first input arg here, which is 0.5 in this case) otherwise
# (0.1, 0.9) the lower and upper bounds that h is allowed to vary during hyperpara. optimization
aromatic=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)), # the 2nd element of graphdot.graph.Graph.nodes
atomic_number=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)), # the 3rd element of graphdot.graph.Graph.nodes
# uX.SquareExponential - Equ. 26 in the paper
# input arg. length_scale is a float32, set to 1 in this case, which corresponds to a kernel value of approx. 1.
# This is used to determine how quickly the kernel should decay to zero.
charge=uX.Constant(0.5, (0.01, 10.0)) * uX.SquareExponential(1.0), # the 4th element of graphdot.graph.Graph.nodes
chiral=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)), # the 5th element of graphdot.graph.Graph.nodes
hcount=uX.Constant(0.5, (0.01, 10.0)) * uX.SquareExponential(1.0), # the 6th element of graphdot.graph.Graph.nodes
hybridization=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)), # the 7th element of graphdot.graph.Graph.nodes
# uX.Convolution - a convolutional microkernel which averages evaluations of a base microkernel between pairs of elements of two variable-length feature sequences
# uX.KroneckerDelta as the base kernel
ring_list=uX.Constant(0.5, (0.01, 100.0)) * uX.Convolution(uX.KroneckerDelta(0.5,(0.1, 0.9))) # the 8th element of graphdot.graph.Graph.nodes
).normalized,
# edge_kernel - A kernel that computes the similarity between individual edges
edge_kernel=uX.Additive(
aromatic=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)), # the 3rd element of graphdot.graph.Graph.nodes
conjugated=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)), # the 4th element of graphdot.graph.Graph.nodes
order=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)), # the 5th element of graphdot.graph.Graph.nodes
ring_stereo=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)), # the 6th element of graphdot.graph.Graph.nodes
stereo=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)) # the 7th element of graphdot.graph.Graph.nodes
).normalized,
p=Uniform(1.0, p_bounds='fixed'), # the starting probability of the random walk on each node
q=0.05 # the probability for the random walk to stop during each step
)
),
alpha=1e-4, # value added to the diagonal of the kernel matrix during fitting
optimizer=True, # default optimizer of L-BFGS-B based on scipy.optimize.minimize
normalize_y=True, # normalize the y values so that the mean and variance are 0 and 1, respectively. Will be reversed when predictions are returned
regularization='+', # alpha (1e-4 in this case) is added to the diagonal of the kernel matrix
)
def train_pipeline_batch(model, train_dataset, test_dataset, target, repeats, batch_num_train, batch_size, verbose = True, print_batch = True, print_repeat = True):
start_time = time.time()
for repeat in range(0, repeats):
for batch in range(0, int(batch_num_train)-1):
batch_dataset = train_dataset.iloc[batch*batch_size:(batch+1)*batch_size] # divide the training data into batches
np.random.seed(0)
if batch == 0 and repeat == 0:
model.fit(batch_dataset.graphs, batch_dataset[target], repeat=1, verbose=verbose)
model.save(path="/content/pkls", filename='batch_0_repeat_0.pkl', overwrite=True)
elif batch == 0 and repeat !=0:
filename_load = 'batch_0_repeat_'+str(repeat-1)+'.pkl'
filename_save = 'batch_0_repeat_'+str(repeat)+'.pkl'
model.load(path="/content/pkls", filename=filename_load)
model.fit(batch_dataset.graphs, batch_dataset[target], repeat=1, verbose=verbose)
model.save(path="/content/pkls", filename=filename_save, overwrite=True)
else:
filename_load = 'batch_'+str(batch-1)+'_repeat_'+str(repeat)+'.pkl'
filename_save = 'batch_'+str(batch)+'_repeat_'+str(repeat)+'.pkl'
model.load(path="/content/pkls", filename=filename_load)
model.fit(batch_dataset.graphs, batch_dataset[target], repeat=1, verbose=verbose)
model.save(path="/content/pkls", filename=filename_save, overwrite=True)
if print_batch:
mu = model.predict(train_dataset.graphs)
print('Training set of repeat '+str(repeat)+' and batch '+str(batch))
print('MAE '+str(repeat)+' and batch '+str(batch)+' '+str(np.mean(np.abs(train_dataset[target] - mu))))
print('RMSE '+str(repeat)+' and batch '+str(batch)+' '+str(np.std(train_dataset[target] - mu)))
if print_repeat:
mu = model.predict(train_dataset.graphs)
print('Training set of repeat '+str(repeat))
print('MAE '+str(repeat)+' '+str(np.mean(np.abs(train_dataset[target] - mu))))
print('RMSE '+str(repeat)+' '+str(np.std(train_dataset[target] - mu)))
mu = model.predict(test_dataset.graphs)
print('Test set of repeat '+str(repeat))
print('MAE '+str(repeat)+' '+str(np.mean(np.abs(test_dataset[target] - mu))))
print('RMSE '+str(repeat)+' '+str(np.std(test_dataset[target] - mu)))
end_time = time.time()
print("the total time consumption is " + str(end_time - start_time) + ".")
train_pipeline_batch(gpr, train_data, test_data, target, repeats=2, batch_num_train=batch_num_train, batch_size=batch_size, print_batch = False, print_repeat = True)
gpr.kernel.hyperparameters
mu = gpr.predict(train_data.graphs)
plt.scatter(train_data[target], mu)
plt.show()
print('Training set')
print('MAE:', np.mean(np.abs(train_data[target] - mu)))
print('RMSE:', np.std(train_data[target] - mu))
mu_test = gpr.predict(test_data.graphs)
plt.scatter(test_data[target], mu_test)
plt.show()
print('Test set')
print('MAE:', np.mean(np.abs(test_data[target] - mu_test)))
print('RMSE:', np.std(test_data[target] - mu_test))
###Output
Test set
MAE: 1.2461703788060545
RMSE: 1.6040210737240486
###Markdown
Work on the kernel. Find a kernel that trains and predicts well.
###Code
gpr2 = GaussianProcessRegressor(
kernel=Normalization(
MarginalizedGraphKernel(
node_kernel=uX.Additive(
aromatic=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),
atomic_number=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)),
charge=uX.Constant(0.5, (0.01, 10.0)) * uX.SquareExponential(1.0),
chiral=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),
hcount=uX.Constant(0.5, (0.01, 10.0)) * uX.SquareExponential(1.0),
hybridization=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),
ring_list=uX.Constant(0.5, (0.01, 100.0)) * uX.Convolution(uX.KroneckerDelta(0.5,(0.1, 0.9)))
).normalized,
edge_kernel=uX.Additive(
aromatic=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),
conjugated=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.5,(0.1, 0.9)),
order=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)),
ring_stereo=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9)),
stereo=uX.Constant(0.5, (0.01, 10.0)) * uX.KroneckerDelta(0.8,(0.1, 0.9))
).normalized,
p=Uniform(1.0, p_bounds='fixed'),
q=0.05
)
),
alpha=1e-2, #different from gpr in alpha where gpr's alpha is 1e-4
optimizer=True,
normalize_y=True,
regularization='+',
)
#gpr2.fit(train_data.graphs, train_data[target], repeat=3, verbose=True)
gpr2.fit(train_data.graphs, train_data[target], repeat=1, verbose=True)
mu = gpr2.predict(train_data.graphs)
plt.scatter(train_data[target], mu)
plt.show()
print('Training set')
print('MAE:', np.mean(np.abs(train_data[target] - mu)))
print('RMSE:', np.std(train_data[target] - mu))
mu_test = gpr2.predict(test_data.graphs)
plt.scatter(test_data[target], mu_test)
plt.show()
print('Test set')
print('MAE:', np.mean(np.abs(test_data[target] - mu_test)))
print('RMSE:', np.std(test_data[target] - mu_test))
###Output
Test set
MAE: 0.9561539409612109
RMSE: 1.2284268143181998
|
NNday_basic.ipynb | ###Markdown
AI2S Deep Learning Day - Beginners notebook. Alessio Ansuini, AREA Research and Technology. Andrea Gasparin and Marco Zullich, Artificial Intelligence Student Society. Pytorch: PyTorch is a Python library offering extensive support for the construction of deep Neural Networks (NNs). One of the main characteristics of PyTorch is that it operates with **Tensors**, as they provide a significant speed-up of the computations. For the scope of this introduction we can simply think of Tensors as arrays, with all the usual operations preserved, as we can see in the following example.
###Code
import numpy as np
import torch
tensor_A = torch.tensor([1,1,1])
array_A = np.array([1,1,1])
print(tensor_A)
print(array_A)
print( 2 * tensor_A )
print( 2 * array_A )
###Output
tensor([1, 1, 1])
[1 1 1]
tensor([2, 2, 2])
[2 2 2]
###Markdown
The images representation: In our context, we will work with black and white images. They are represented as matrices containing numbers. The numbers go from 0 (white) to the maximum value (black), covering the whole grey-scale spectrum in between.
###Code
central_vertical_line = torch.tensor([[ 0, 4, 0],
[ 0, 8, 0],
[ 0, 10, 0]])
import matplotlib.pyplot as plt #plots and image viewer module
plt.imshow(central_vertical_line, cmap="Greys")
###Output
_____no_output_____
###Markdown
Handwritten digit recognition (MNIST dataset): In this notebook, we'll train a simple fully-connected NN for the classification of the MNIST dataset. The MNIST (*modified National Institute of Standards and Technology database*) is a collection of 28x28 pixels black and white images containing handwritten digits. Let's see an example:
###Code
import torchvision #the module where is stored the dataset
#to improve training efficiency, data are first normalised. The "transform" method will do the job for us
transform = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize((0.1307,), (0.3081,)),
])
trainset = torchvision.datasets.MNIST(root="./data", train=True, transform=transform, download=True)
testset = torchvision.datasets.MNIST(root="./data", train=False, transform=transform, download=True)
###Output
_____no_output_____
###Markdown
**trainset.data** contains the images, represented as 28x28 matrices of numbers. **trainset.targets** contains the labels, i.e. the numbers represented in the images.
###Code
print("trainset.data[0] is the first image; its size is:", trainset.data[0].shape)
print("the digit represented is the number: ", trainset.targets[0])
# if we have a tensor composed of a single scalar, we can extract the scalar via tensor.item()
print("scalar representation: ", trainset.targets[0].item())
###Output
trainset.data[0] is the first image; its size is: torch.Size([28, 28])
the digit represented is the number: tensor(5)
scalar representation: 5
###Markdown
Let's see that the image actually shows the number 5
###Code
print(trainset.data[0][6])
plt.imshow(trainset.data[0], cmap='Greys')
###Output
tensor([ 0, 0, 0, 0, 0, 0, 0, 0, 30, 36, 94, 154, 170, 253,
253, 253, 253, 253, 225, 172, 253, 242, 195, 64, 0, 0, 0, 0],
dtype=torch.uint8)
###Markdown
THE TRAININGFirst we need to separate the images and the labels
###Code
train_imgs = trainset.data
train_labels = trainset.targets
test_imgs = testset.data
test_labels = testset.targets
###Output
_____no_output_____
###Markdown
Flatten the image: To simplify the network flow, images are initially flattened, meaning that the corresponding matrix is transformed into a single long row array:
###Code
central_vertical_line_flattened = central_vertical_line.flatten()
print("initial matrix:\n",central_vertical_line)
print("\nmatrix flattened:\n",central_vertical_line_flattened)
print("\nmatrix shape:",central_vertical_line.shape, " flattened shape:", central_vertical_line_flattened.shape)
###Output
initial matrix:
tensor([[ 0, 4, 0],
[ 0, 8, 0],
[ 0, 10, 0]])
matrix flattened:
tensor([ 0, 4, 0, 0, 8, 0, 0, 10, 0])
matrix shape: torch.Size([3, 3]) flattened shape: torch.Size([9])
###Markdown
Creating the NN: We create the NN as described below: * the **input layer** has 784 neurons: this is because the images have 28x28=784 numbers; * there are three **hidden layers**: the first one has 16 neurons, the second one has 32, the third one has 16 again; * the **output layer** has 10 neurons, one per class. The NN can be easily created using the `torch.nn.Sequential` method, which allows for the construction of the NN by pipelining the building blocks in a list and passing it to the Sequential constructor. We pass to Sequential the following elements: * we start with a `Flatten()` module since we need to flatten the 2D 28x28 images into a 784-element 1D array; * we alternate `Linear` layers (fully-connected layers) with `ReLU` modules (Rectified Linear Unit activation functions); * we conclude with a `Linear` layer without an activation function: this will output, for each image, an array of 10 scalars, each one indicating the "confidence" that the network has in assigning the input image to the corresponding class. We'll assign the image to the class having the highest confidence. After this, the architecture of the NN is complete! We will then focus on telling Python how to train this NN.
###Code
from torch import nn
inputDimension = 784
outputDimension = 10 # the number of classes - 10 digits from 0 to 9
layersWidth = 16
network = nn.Sequential(
nn.Flatten(),
nn.Linear(inputDimension, layersWidth),
nn.ReLU(),
nn.Linear(layersWidth, layersWidth*2),
nn.ReLU(),
nn.Linear(layersWidth*2, layersWidth),
nn.ReLU(),
nn.Linear(layersWidth, outputDimension),
)
###Output
_____no_output_____
###Markdown
NN training: We'll use vanilla mini-batch Stochastic Gradient Descent (SGD) with a learning rate of *learningRate* (you choose!) as the optimizer. We'll create mini-batches of size *batchSize* (i.e., with *batchSize* = 100 we'll have 60000/100 = 600 mini-batches containing our data) for the training. We'll train the NN for *epochs* epochs, each epoch indicating how many times the NN "sees" the whole dataset during training. The loss function we'll use is the **categorical cross-entropy** (particularly useful for non-binary classification problems) and we'll also evaluate the network on its **accuracy** (i.e., images correctly classified divided by total images). *learningRate*, *batchSize*, and *epochs* are parameters you can play with; let's see how you can improve the accuracy!
###Code
#hyper parameters
batchSize = 100
learningRate = 0.1
epochs = 3
###Output
_____no_output_____
###Markdown
In order to pass our data to the network, we'll make use of DataLoaders: they take care of subdividing the dataset into mini-batches, applying the requested transformations, and optionally re-shuffling them at the beginning of each new epoch.
###Code
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batchSize, shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=batchSize, shuffle=False)
###Output
_____no_output_____
###Markdown
We also provide a function to compute the accuracy of the nn given its outputs and the true values of the images they are trying to classify
###Code
def calculate_accuracy(nn_output, true_values):
class_prediction = nn_output.topk(1).indices.flatten()
match = (class_prediction == true_values)
correctly_classified = match.sum().item()
accuracy = correctly_classified / nn_output.size(0)
return accuracy
###Output
_____no_output_____
###Markdown
Let's check that it works for a fictitious batch of 4 images and 3 classes. A NN output in this case will be a matrix of shape 4x3, each row holding the probability that the model assigns the corresponding image to the corresponding class. We create a fake ground truth such that the NN assigns the first 3 images correctly: the corresponding accuracy should then be 3/4 = 0.75. Below is a minimal sketch of that check (the confidence values and the `fake_output`/`fake_truth` names are made up for illustration):
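###Code
# Minimal sanity check (made-up values): the first three fake predictions match
# the ground truth and the fourth does not, so the expected accuracy is 3/4 = 0.75
fake_output = torch.tensor([[0.9, 0.05, 0.05],
                            [0.1, 0.8, 0.1],
                            [0.2, 0.1, 0.7],
                            [0.6, 0.3, 0.1]])
fake_truth = torch.tensor([0, 1, 2, 1])
print(calculate_accuracy(fake_output, fake_truth))
###Output
_____no_output_____
###Markdown
 Here is the actual training: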
###Code
lossValues = [] # to store the loss value trend during the training (we want it to DECREASE as much as possible)
accuracy = [] # to store the accuracy trend during the training (we want it to INCREASE as much as possible)
lossFunction = torch.nn.CrossEntropyLoss() #the error function the nn is trying to minimise
network.train() #this tells our nn that it is in training mode.
optimizer = torch.optim.SGD(network.parameters(), lr=learningRate) #the kind of optimiser we want of our nn to use
# MAIN LOOP: one iteration for each epoch
for e in range(epochs):
# INNER LOOP: one for each MINI-BATCH
for i, (imgs, ground_truth) in enumerate(trainloader): #range(num_of_batches):
optimizer.zero_grad() # VERY TECHNICAL: needed in order NOT to accumulate gradients from the previous iterations
predictions = network(imgs)
loss = lossFunction(predictions, ground_truth)
loss.backward()
optimizer.step()
accuracy_batch = calculate_accuracy(predictions, ground_truth)
lossValues.append(loss.item())
accuracy.append(accuracy_batch)
# Every 200 iterations, we print the status of loss and accuracy
if (i+1)%200 == 0:
print(f"***Epoch {e+1} | Iteration {i+1} | Mini-batch loss {loss.item()} | Mini-batch accuracy {accuracy_batch}")
# Let us draw the charts for loss and accuracy for each training iteration
plt.plot(lossValues, label="loss")
plt.plot(accuracy, label="accuracy")
plt.legend()
###Output
_____no_output_____
###Markdown
Check yourselfHere we provide a function to pick a few images from the test set and check if the network classifies them properly
###Code
def classify():
for i in range(5):
num = np.random.randint(0,test_imgs.shape[0])
network.eval()
plt.imshow(test_imgs[num])
plt.show()
print("Our network classifies this image as: ", network(test_imgs[num:num+1].float()).topk(1).indices.flatten().item())
print("The true value is: ", test_labels[num:num+1].item())
print("\n\n")
classify()
###Output
_____no_output_____
###Markdown
AI2S Deep Learning Day - Beginners notebookAlessio Ansuini, AREA Research and TechnologyAndrea Gasparin and Marco Zullich, Artificial Intelligence Student Society PytorchPyTorch is a Python library offering extensive support for the construction of deep Neural Networks (NNs).One of the main characteristics of PyTorch is that it operates with **Tensors**, as they provide a significant speed-up of the computations.For the scope of this introduction we can simply think of Tensors as arrays, with all the related operations preserved, as we can see in the following example.
###Code
import torch
import numpy as np
tensor_A = torch.tensor([1,1,1])
array_A = np.array([1,1,1])
print(tensor_A)
print(array_A)
print( 2 * tensor_A )
print( 2 * array_A )
###Output
tensor([1, 1, 1])
[1 1 1]
tensor([2, 2, 2])
[2 2 2]
###Markdown
The image representationIn our context, we will work with black and white images. They are represented as matrices containing numbers.The numbers will go from 0 (white) to the max value (black), covering the whole greyscale spectrum.
###Code
central_vertical_line = torch.tensor([[ 0, 4, 0],
[ 0, 8, 0],
[ 0, 10, 0]])
import matplotlib.pyplot as plt #plots and image viewer module
plt.imshow(central_vertical_line, cmap="Greys")
###Output
_____no_output_____
###Markdown
Handwritten digit recognition (MNIST dataset)In this notebook, we'll train a simple fully-connected NN for the classification of the MNIST dataset. The MNIST (*Modified National Institute of Standards and Technology database*) is a collection of 28x28-pixel black and white images containing handwritten digits. Let's see an example:
###Code
import torchvision #the module where the dataset is stored
#to improve training efficiency, data are first normalised. The "transform" method will do the job for us
transform = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize((0.1307,), (0.3081,)),
])
trainset = torchvision.datasets.MNIST(root="./data", train=True, transform=transform, download=True)
testset = torchvision.datasets.MNIST(root="./data", train=False, transform=transform, download=True)
###Output
_____no_output_____
###Markdown
**trainset.data** contains the images, represented as 28x28 matrices of pixel values**trainset.targets** contains the labels, so the numbers represented in the images
###Code
print("trainset.data[0] is the first image; its size is:", trainset.data[0].shape)
print("the digit represented is the number: ", trainset.targets[0])
# if we have a tensor composed of a single scalar, we can extract the scalar via tensor.item()
print("scalar representation: ", trainset.targets[0].item())
###Output
trainset.data[0] is the first image; its size is: torch.Size([28, 28])
the digit represented is the number: tensor(5)
scalar representation: 5
###Markdown
Let's see that the image actually shows the number 5
###Code
print(trainset.data[0][6])
plt.imshow(trainset.data[0], cmap='Greys')
###Output
tensor([ 0, 0, 0, 0, 0, 0, 0, 0, 30, 36, 94, 154, 170, 253,
253, 253, 253, 253, 225, 172, 253, 242, 195, 64, 0, 0, 0, 0],
dtype=torch.uint8)
###Markdown
THE TRAININGFirst we need to separate the images and the labels
###Code
train_imgs = trainset.data
train_labels = trainset.targets
test_imgs = testset.data
test_labels = testset.targets
###Output
_____no_output_____
###Markdown
Flatten the imageTo simplify the network flow, images are initially flattened, meaning that the corresponding matrix will be transformed into a single long row array:
###Code
central_vertical_line_flattened = central_vertical_line.flatten()
print("initial matrix:\n",central_vertical_line)
print("\nmatrix flattened:\n",central_vertical_line_flattened)
print("\nmatrix shape:",central_vertical_line.shape, " flattened shape:", central_vertical_line_flattened.shape)
###Output
initial matrix:
tensor([[ 0, 4, 0],
[ 0, 8, 0],
[ 0, 10, 0]])
matrix flattened:
tensor([ 0, 4, 0, 0, 8, 0, 0, 10, 0])
matrix shape: torch.Size([3, 3]) flattened shape: torch.Size([9])
###Markdown
Creating the NNWe create the NN as in the image below:* the **input layer** has 784 neurons: this is because the images have 28x28=784 numbers;* there are three **hidden layers**: the first one has 16 neurons, the second one has 32, the third one has 16 again;* the **output layer** has 10 neurons, one per class.The NN can be easily created using the `torch.nn.Sequential` method, which allows for the construction of the NN by pipelining the building blocks in a list and passing it to the Sequential constructor.We pass to Sequential the following elements:* we start with a `Flatten()` module since we need to flatten the 2D 28x28 images into the 784-element 1D array* we alternate `Linear` layers (fully-connected layers) with `ReLU` modules (Rectified Linear Unit) activation functions* we conclude with a `Linear` layer without an activation function: this will output, for each image, an array of 10 scalars, each one indicating the "confidence" that the network has in assigning the input image to the corresponding class. We'll assign the image to the class having the highest confidence.After this, the architecture of the NN is complete! We will then focus on telling Python how to train this NN.
###Code
from torch import nn
inputDimension = 784
outputDimension = 10 # the number of classes - 10 digits from 0 to 9
layersWidth = 16
network = nn.Sequential(
nn.Flatten(),
nn.Linear(inputDimension, layersWidth),
nn.ReLU(),
nn.Linear(layersWidth, layersWidth*2),
nn.ReLU(),
nn.Linear(layersWidth*2, layersWidth),
nn.ReLU(),
nn.Linear(layersWidth, outputDimension),
)
###Output
_____no_output_____
###Markdown
NN trainingWe'll use vanilla mini-batch Stochastic Gradient Descent (SGD) with a learning rate of *learningRate* (you choose!!!) as the optimizer.We'll create mini-batches of size *batchSize* (i.e., we'll have 60000/*batchSize*=600 mini-batches containing our data) for the training.We'll train the NN for *epochs* epochs, each epoch indicating how many times the NN "sees" the whole dataset during training.The loss function we'll use is the **categorical cross-entropy** (particularly useful for non-binary classification problems) and we'll also evaluate the network on its **accuracy** (i.e., images correctly classified divided by total images). *learningRate*, *batchSize*, and *epochs* are parameters you can play with; let's see how you can improve the accuracy!!!
###Code
#hyper parameters
batchSize = 100
learningRate = 0.1
epochs = 3
###Output
_____no_output_____
###Markdown
In order to pass our data to the network, we'll make use of DataLoaders: they take care of subdividing the dataset into mini-batches, applying the requested transformations, and optionally re-shuffling them at the beginning of each new epoch.
###Code
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batchSize, shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=batchSize, shuffle=False)
###Output
_____no_output_____
###Markdown
We also provide a function to compute the accuracy of the NN given its outputs and the true values of the images it is trying to classify.
###Code
def calculate_accuracy(nn_output, true_values):
class_prediction = nn_output.topk(1).indices.flatten()
match = (class_prediction == true_values)
correctly_classified = match.sum().item()
accuracy = correctly_classified / nn_output.size(0)
return accuracy
###Output
_____no_output_____
###Markdown
Let's check that it works for a fictitious batch of 4 images and 3 classes.A NN output in this case will be a matrix of shape 4x3, each row holding the probability that the model assigns the corresponding image to the corresponding class.We create a fake ground truth s.t. the NN correctly assigns the first 3 images: the corresponding accuracy should then be 3/4=0.75 Here the actual training
###Code
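# Quick sanity check of calculate_accuracy (the values below are made-up for illustration):
# a fictitious output for 4 images and 3 classes, where the first 3 rows predict the true class
# and the 4th does not, so the expected accuracy is 3/4 = 0.75
fake_output = torch.tensor([[0.8, 0.1, 0.1],
                            [0.1, 0.7, 0.2],
                            [0.2, 0.2, 0.6],
                            [0.6, 0.3, 0.1]])
fake_ground_truth = torch.tensor([0, 1, 2, 1])
print(calculate_accuracy(fake_output, fake_ground_truth)) # prints 0.75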
lossValues = [] #to store the loss value trend during the training (we want it to DECREASE as much as possible)
accuracy = [] #to store the accuracy trend during the training (we want it to INCREASE as much as possible)
lossFunction = torch.nn.CrossEntropyLoss() #the error function the nn is trying to minimise
network.train() #this tells our nn that it is in training mode.
optimizer = torch.optim.SGD(network.parameters(), lr=learningRate) #the kind of optimiser we want our nn to use
# MAIN LOOP: one iteration for each epoch
for e in range(epochs):
# INNER LOOP: one for each MINI-BATCH
for i, (imgs, ground_truth) in enumerate(trainloader): #range(num_of_batches):
optimizer.zero_grad() # VERY TECHNICAL needed in order NOT to accumulate gradients on top of the previous epochs
predictions = network(imgs)
loss = lossFunction(predictions, ground_truth)
loss.backward()
optimizer.step()
accuracy_batch = calculate_accuracy(predictions, ground_truth)
lossValues.append(loss.item())
accuracy.append(accuracy_batch)
# Every 200 iterations, we print the status of loss and accuracy
if (i+1)%200 == 0:
print(f"***Epoch {e+1} | Iteration {i+1} | Mini-batch loss {loss.item()} | Mini-batch accuracy {accuracy_batch}")
# Let us draw the charts for loss and accuracy for each training iteration
plt.plot(lossValues, label="loss")
plt.plot(accuracy, label="accuracy")
plt.legend()
###Output
_____no_output_____
###Markdown
Check yourselfHere we provide a function to pick a few images from the test set and check if the network classifies them properly
###Code
def classify():
for i in range(5):
num = np.random.randint(0,test_imgs.shape[0])
network.eval()
plt.imshow(test_imgs[num])
plt.show()
print("Our network classifies this image as: ", network(test_imgs[num:num+1].float()).topk(1).indices.flatten().item())
print("The true value is: ", test_labels[num:num+1].item())
print("\n\n")
classify()
###Output
_____no_output_____ |
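###Markdown
As an extra exercise, here is a minimal sketch of how the so-far-unused testloader could be used to estimate the accuracy on the whole (normalised) test set.
###Code
network.eval() # switch to evaluation mode
correct, total = 0, 0
with torch.no_grad(): # gradients are not needed for evaluation
    for imgs, ground_truth in testloader:
        predictions = network(imgs)
        correct += (predictions.topk(1).indices.flatten() == ground_truth).sum().item()
        total += ground_truth.size(0)
print("Test set accuracy:", correct / total)
###Output
_____no_output_____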
Data Science With Python/12 - Project - House Price.ipynb | ###Markdown
Project - Parameters with Highest Impact on House Prices ![Data Science Workflow](img/ds-workflow.png) Goal of Project- A real estate dealer wants to figure out what matters most when selling a house- They provide various sales data- Your job is to figure out which 10 parameters (features) matter the most and present the findings Step 1: Acquire- Explore problem- Identify data- Import data Step 1.a: Import libraries- Execute the cell below (SHIFT + ENTER)- NOTE: You might need to install mlxtend, if so, run the following in a cell```!pip install mlxtend```
###Code
import pandas as pd
from sklearn.feature_selection import VarianceThreshold
from sklearn.model_selection import train_test_split
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
###Output
_____no_output_____ |
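###Markdown
A rough sketch of how the imported pieces could fit together (the file name 'files/house_sales.csv' and the 'SalePrice' target column below are assumptions for illustration; the data you are given may differ):
###Code
data = pd.read_csv('files/house_sales.csv') # assumed file name and location
X = data.select_dtypes('number').drop('SalePrice', axis=1).fillna(-1) # 'SalePrice' is an assumed target column
y = data['SalePrice']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
sfs = SFS(LinearRegression(), k_features=10, forward=True, scoring='r2', cv=4) # pick the 10 most useful features
sfs.fit(X_train, y_train)
print(sfs.k_feature_names_) # the 10 features that matter the most
###Output
_____no_output_____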
notebooks/01_NGS/Working_with_VCF.ipynb | ###Markdown
Getting the necessary data You just need to do this only once
###Code
!rm -f genotypes.vcf.gz 2>/dev/null
!tabix -fh ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/release/20130502/supporting/vcf_with_sample_level_annotation/ALL.chr22.phase3_shapeit2_mvncall_integrated_v5_extra_anno.20130502.genotypes.vcf.gz 22:1-17000000|bgzip -c > genotypes.vcf.gz
!tabix -p vcf genotypes.vcf.gz
from collections import defaultdict
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
import vcf
v = vcf.Reader(filename='genotypes.vcf.gz')
print('Variant Level information')
infos = v.infos
for info in infos:
print(info)
print('Sample Level information')
fmts = v.formats
for fmt in fmts:
print(fmt)
v = vcf.Reader(filename='genotypes.vcf.gz')
rec = next(v)
print(rec.CHROM, rec.POS, rec.ID, rec.REF, rec.ALT, rec.QUAL, rec.FILTER)
print(rec.INFO)
print(rec.FORMAT)
samples = rec.samples
print(len(samples))
sample = samples[0]
print(sample.called, sample.gt_alleles, sample.is_het, sample.is_variant, sample.phased)
print(int(sample['DP']))
f = vcf.Reader(filename='genotypes.vcf.gz')
my_type = defaultdict(int)
num_alts = defaultdict(int)
for rec in f:
my_type[rec.var_type, rec.var_subtype] += 1
if rec.is_snp:
num_alts[len(rec.ALT)] += 1
print(my_type)
print(num_alts)
f = vcf.Reader(filename='genotypes.vcf.gz')
sample_dp = defaultdict(int)
for rec in f:
if not rec.is_snp or len(rec.ALT) != 1:
continue
for sample in rec.samples:
dp = sample['DP']
if dp is None:
dp = 0
dp = int(dp)
sample_dp[dp] += 1
dps = sorted(sample_dp.keys())  # dict views have no .sort() in Python 3
dp_dist = [sample_dp[x] for x in dps]
fig, ax = plt.subplots(figsize=(16, 9))
ax.plot(dp_dist[:50], 'r')
ax.axvline(dp_dist.index(max(dp_dist)))
###Output
_____no_output_____ |
site/tr/r1/tutorials/eager/automatic_differentiation.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Otomatik degisim ve egim banti Run in Google Colab View source on GitHub Bir onceki egitim kitapciginda 'Tensor'lari ve onlar ustunde kullanabileceginiz operasyonlari tanittik. Bu kitapcikta ise makine ogrenmesi modellerinin eniyilenmesinde onemli bir teknik olan [otomatik degisimi](https://en.wikipedia.org/wiki/Automatic_differentiation) ogrenecegiz. Kurulum
###Code
import tensorflow.compat.v1 as tf
###Output
_____no_output_____
###Markdown
Egim bantlariTensorFlow'un [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) API'si otomatik degisim yani girdi degiskenlerine bagli olarak hesaplanan egimin hesaplanisini hali hazirda bize saglar. Tensorflow `tf.GradientTape` kapsaminda yapilan butun operasyonlari bir "tape(bant)"e "kaydeder". Tensorflow daha sonra "kaydedilmis" egimleri, bu bant ve her bir kayitla iliskili egim verilerini [ters mod degisimi](https://en.wikipedia.org/wiki/Automatic_differentiation) kullanarak hesaplar.Ornegin:
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Orjinal girdi tensoru x'e gore z'nin turevi
dz_dx = t.gradient(z, x)
for i in [0, 1]:
for j in [0, 1]:
assert dz_dx[i][j].numpy() == 8.0
###Output
_____no_output_____
###Markdown
Ayrica "kaydedilmis" 'tf.GradientTape' kapsaminda hesaplanan ara degerlere gore ciktilari egimini de isteyebilirsiniz.
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Banti kullanarak ara deger y'ye gore z'nin turevini hesaplayabiliriz.
dz_dy = t.gradient(z, y)
assert dz_dy.numpy() == 8.0
###Output
_____no_output_____
###Markdown
GradientTape.gradient() yontemini cagirdimizda GradientTape tarafindan tutulan kaynaklar serbest birakilir. Ayni degerleri kullanarak birden fazla egim hesaplamak istiyorsaniz 'persistent(kalici)' egim banti olusturmalisiniz. Bu sayede bant nesnesi cop toplayicisi tarafindan toplanip kaynaklar serbest birakildikca 'gradient()' yontemini bircok kere cagirmamiza izin verir. Ornegin:
###Code
x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as t:
t.watch(x)
y = x * x
z = y * y
dz_dx = t.gradient(z, x) # 108.0 (4*x^3 at x = 3)
dy_dx = t.gradient(y, x) # 6.0
del t # Referansi banta indirgeyelim
###Output
_____no_output_____
###Markdown
Kontrol akimini kaydedelimBantlar operasyonlar yurutuldukce kaydettigi icin, Python kontrol akimlari (`if`ler ve `while`lar gibi) dogal olarak islenir:
###Code
def f(x, y):
output = 1.0
for i in range(y):
if i > 1 and i < 5:
output = tf.multiply(output, x)
return output
def grad(x, y):
with tf.GradientTape() as t:
t.watch(x)
out = f(x, y)
return t.gradient(out, x)
x = tf.convert_to_tensor(2.0)
assert grad(x, 6).numpy() == 12.0
assert grad(x, 5).numpy() == 12.0
assert grad(x, 4).numpy() == 4.0
###Output
_____no_output_____
###Markdown
Yuksek-sirali egimler`GradientTape` kapsam yoneticisindeki operasyonlar otomatik degisim icin kaydedilir. Eger egimler bu kapsamda hesaplandiysa onlar da ayni sekilde kaydedilir. Sonuc olarak, ayni API'yi kullanarak yuksek-sirali egimleri hesaplayabiliriz. Ornegin:
###Code
x = tf.Variable(1.0) # 1.0 degerine ilklenmis bir Tensorflow degiskeni olusturalim
with tf.GradientTape() as t:
with tf.GradientTape() as t2:
y = x * x * x
# 't' kapsam yoneticisi icerisinde egimi hesaplayalim
# ki bu egim hesaplanmasinin turevlenebilir oldugu anlamina gelir.
dy_dx = t2.gradient(y, x)
d2y_dx2 = t.gradient(dy_dx, x)
assert dy_dx.numpy() == 3.0
assert d2y_dx2.numpy() == 6.0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Otomatik degisim ve egim banti Run in Google Colab View source on GitHub Bir onceki egitim kitapciginda 'Tensor'lari ve onlar ustunde kullanabileceginiz operasyonlari tanittik. Bu kitapcikta ise makine ogrenmesi modellerinin eniyilenmesinde onemli bir teknik olan [otomatik degisimi](https://en.wikipedia.org/wiki/Automatic_differentiation) ogrenecegiz. Kurulum
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
###Output
_____no_output_____
###Markdown
Egim bantlariTensorFlow'un [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) API'si otomatik degisim yani girdi degiskenlerine bagli olarak hesaplanan egimin hesaplanisini hali hazirda bize saglar. Tensorflow `tf.GradientTape` kapsaminda yapilan butun operasyonlari bir "tape(bant)"e "kaydeder". Tensorflow daha sonra "kaydedilmis" egimleri, bu bant ve her bir kayitla iliskili egim verilerini [ters mod degisimi](https://en.wikipedia.org/wiki/Automatic_differentiation) kullanarak hesaplar.Ornegin:
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Orjinal girdi tensoru x'e gore z'nin turevi
dz_dx = t.gradient(z, x)
for i in [0, 1]:
for j in [0, 1]:
assert dz_dx[i][j].numpy() == 8.0
###Output
_____no_output_____
###Markdown
Ayrica "kaydedilmis" 'tf.GradientTape' kapsaminda hesaplanan ara degerlere gore ciktilari egimini de isteyebilirsiniz.
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Banti kullanarak ara deger y'ye gore z'nin turevini hesaplayabiliriz.
dz_dy = t.gradient(z, y)
assert dz_dy.numpy() == 8.0
###Output
_____no_output_____
###Markdown
GradientTape.gradient() yontemini cagirdimizda GradientTape tarafindan tutulan kaynaklar serbest birakilir. Ayni degerleri kullanarak birden fazla egim hesaplamak istiyorsaniz 'persistent(kalici)' egim banti olusturmalisiniz. Bu sayede bant nesnesi cop toplayicisi tarafindan toplanip kaynaklar serbest birakildikca 'gradient()' yontemini bircok kere cagirmamiza izin verir. Ornegin:
###Code
x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as t:
t.watch(x)
y = x * x
z = y * y
dz_dx = t.gradient(z, x) # 108.0 (4*x^3 at x = 3)
dy_dx = t.gradient(y, x) # 6.0
del t # Referansi banta indirgeyelim
###Output
_____no_output_____
###Markdown
Kontrol akimini kaydedelimBantlar operasyonlar yurutuldukce kaydettigi icin, Python kontrol akimlari (`if`ler ve `while`lar gibi) dogal olarak islenir:
###Code
def f(x, y):
output = 1.0
for i in range(y):
if i > 1 and i < 5:
output = tf.multiply(output, x)
return output
def grad(x, y):
with tf.GradientTape() as t:
t.watch(x)
out = f(x, y)
return t.gradient(out, x)
x = tf.convert_to_tensor(2.0)
assert grad(x, 6).numpy() == 12.0
assert grad(x, 5).numpy() == 12.0
assert grad(x, 4).numpy() == 4.0
###Output
_____no_output_____
###Markdown
Yuksek-sirali egimler`GradientTape` kapsam yoneticisindeki operasyonlar otomatik degisim icin kaydedilir. Eger egimler bu kapsamda hesaplandiysa onlar da ayni sekilde kaydedilir. Sonuc olarak, ayni API'yi kullanarak yuksek-sirali egimleri hesaplayabiliriz. Ornegin:
###Code
x = tf.Variable(1.0) # 1.0 degerine ilklenmis bir Tensorflow degiskeni olusturalim
with tf.GradientTape() as t:
with tf.GradientTape() as t2:
y = x * x * x
# 't' kapsam yoneticisi icerisinde egimi hesaplayalim
# ki bu egim hesaplanmasinin turevlenebilir oldugu anlamina gelir.
dy_dx = t2.gradient(y, x)
d2y_dx2 = t.gradient(dy_dx, x)
assert dy_dx.numpy() == 3.0
assert d2y_dx2.numpy() == 6.0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Otomatik degisim ve egim banti Run in Google Colab View source on GitHub Bir onceki egitim kitapciginda 'Tensor'lari ve onlar ustunde kullanabileceginiz operasyonlari tanittik. Bu kitapcikta ise makine ogrenmesi modellerinin eniyilenmesinde onemli bir teknik olan [otomatik degisimi](https://en.wikipedia.org/wiki/Automatic_differentiation) ogrenecegiz. Kurulum
###Code
import tensorflow.compat.v1 as tf
###Output
_____no_output_____
###Markdown
Egim bantlariTensorFlow'un [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) API'si otomatik degisim yani girdi degiskenlerine bagli olarak hesaplanan egimin hesaplanisini hali hazirda bize saglar. Tensorflow `tf.GradientTape` kapsaminda yapilan butun operasyonlari bir "tape(bant)"e "kaydeder". Tensorflow daha sonra "kaydedilmis" egimleri, bu bant ve her bir kayitla iliskili egim verilerini [ters mod degisimi](https://en.wikipedia.org/wiki/Automatic_differentiation) kullanarak hesaplar.Ornegin:
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Orjinal girdi tensoru x'e gore z'nin turevi
dz_dx = t.gradient(z, x)
for i in [0, 1]:
for j in [0, 1]:
assert dz_dx[i][j].numpy() == 8.0
###Output
_____no_output_____
###Markdown
Ayrica "kaydedilmis" 'tf.GradientTape' kapsaminda hesaplanan ara degerlere gore ciktilari egimini de isteyebilirsiniz.
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Banti kullanarak ara deger y'ye gore z'nin turevini hesaplayabiliriz.
dz_dy = t.gradient(z, y)
assert dz_dy.numpy() == 8.0
###Output
_____no_output_____
###Markdown
GradientTape.gradient() yontemini cagirdimizda GradientTape tarafindan tutulan kaynaklar serbest birakilir. Ayni degerleri kullanarak birden fazla egim hesaplamak istiyorsaniz 'persistent(kalici)' egim banti olusturmalisiniz. Bu sayede bant nesnesi cop toplayicisi tarafindan toplanip kaynaklar serbest birakildikca 'gradient()' yontemini bircok kere cagirmamiza izin verir. Ornegin:
###Code
x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as t:
t.watch(x)
y = x * x
z = y * y
dz_dx = t.gradient(z, x) # 108.0 (4*x^3 at x = 3)
dy_dx = t.gradient(y, x) # 6.0
del t # Referansi banta indirgeyelim
###Output
_____no_output_____
###Markdown
Kontrol akimini kaydedelimBantlar operasyonlar yurutuldukce kaydettigi icin, Python kontrol akimlari (`if`ler ve `while`lar gibi) dogal olarak islenir:
###Code
def f(x, y):
output = 1.0
for i in range(y):
if i > 1 and i < 5:
output = tf.multiply(output, x)
return output
def grad(x, y):
with tf.GradientTape() as t:
t.watch(x)
out = f(x, y)
return t.gradient(out, x)
x = tf.convert_to_tensor(2.0)
assert grad(x, 6).numpy() == 12.0
assert grad(x, 5).numpy() == 12.0
assert grad(x, 4).numpy() == 4.0
###Output
_____no_output_____
###Markdown
Yuksek-sirali egimler`GradientTape` kapsam yoneticisindeki operasyonlar otomatik degisim icin kaydedilir. Eger egimler bu kapsamda hesaplandiysa onlar da ayni sekilde kaydedilir. Sonuc olarak, ayni API'yi kullanarak yuksek-sirali egimleri hesaplayabiliriz. Ornegin:
###Code
x = tf.Variable(1.0) # 1.0 degerine ilklenmis bir Tensorflow degiskeni olusturalim
with tf.GradientTape() as t:
with tf.GradientTape() as t2:
y = x * x * x
# 't' kapsam yoneticisi icerisinde egimi hesaplayalim
# ki bu egim hesaplanmasinin turevlenebilir oldugu anlamina gelir.
dy_dx = t2.gradient(y, x)
d2y_dx2 = t.gradient(dy_dx, x)
assert dy_dx.numpy() == 3.0
assert d2y_dx2.numpy() == 6.0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Otomatik degisim ve egim banti Run in Google Colab View source on GitHub Bir onceki egitim kitapciginda 'Tensor'lari ve onlar ustunde kullanabileceginiz operasyonlari tanittik. Bu kitapcikta ise makine ogrenmesi modellerinin eniyilenmesinde onemli bir teknik olan [otomatik degisimi](https://en.wikipedia.org/wiki/Automatic_differentiation) ogrenecegiz. Kurulum
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
###Output
_____no_output_____
###Markdown
Egim bantlariTensorFlow'un [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) API'si otomatik degisim yani girdi degiskenlerine bagli olarak hesaplanan egimin hesaplanisini hali hazirda bize saglar. Tensorflow `tf.GradientTape` kapsaminda yapilan butun operasyonlari bir "tape(bant)"e "kaydeder". Tensorflow daha sonra "kaydedilmis" egimleri, bu bant ve her bir kayitla iliskili egim verilerini [ters mod degisimi](https://en.wikipedia.org/wiki/Automatic_differentiation) kullanarak hesaplar.Ornegin:
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Orjinal girdi tensoru x'e gore z'nin turevi
dz_dx = t.gradient(z, x)
for i in [0, 1]:
for j in [0, 1]:
assert dz_dx[i][j].numpy() == 8.0
###Output
_____no_output_____
###Markdown
Ayrica "kaydedilmis" 'tf.GradientTape' kapsaminda hesaplanan ara degerlere gore ciktilari egimini de isteyebilirsiniz.
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Banti kullanarak ara deger y'ye gore z'nin turevini hesaplayabiliriz.
dz_dy = t.gradient(z, y)
assert dz_dy.numpy() == 8.0
###Output
_____no_output_____
###Markdown
GradientTape.gradient() yontemini cagirdimizda GradientTape tarafindan tutulan kaynaklar serbest birakilir. Ayni degerleri kullanarak birden fazla egim hesaplamak istiyorsaniz 'persistent(kalici)' egim banti olusturmalisiniz. Bu sayede bant nesnesi cop toplayicisi tarafindan toplanip kaynaklar serbest birakildikca 'gradient()' yontemini bircok kere cagirmamiza izin verir. Ornegin:
###Code
x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as t:
t.watch(x)
y = x * x
z = y * y
dz_dx = t.gradient(z, x) # 108.0 (4*x^3 at x = 3)
dy_dx = t.gradient(y, x) # 6.0
del t # Referansi banta indirgeyelim
###Output
_____no_output_____
###Markdown
Kontrol akimini kaydedelimBantlar operasyonlar yurutuldukce kaydettigi icin, Python kontrol akimlari (`if`ler ve `while`lar gibi) dogal olarak islenir:
###Code
def f(x, y):
output = 1.0
for i in range(y):
if i > 1 and i < 5:
output = tf.multiply(output, x)
return output
def grad(x, y):
with tf.GradientTape() as t:
t.watch(x)
out = f(x, y)
return t.gradient(out, x)
x = tf.convert_to_tensor(2.0)
assert grad(x, 6).numpy() == 12.0
assert grad(x, 5).numpy() == 12.0
assert grad(x, 4).numpy() == 4.0
###Output
_____no_output_____
###Markdown
Yuksek-sirali egimler`GradientTape` kapsam yoneticisindeki operasyonlar otomatik degisim icin kaydedilir. Eger egimler bu kapsamda hesaplandiysa onlar da ayni sekilde kaydedilir. Sonuc olarak, ayni API'yi kullanarak yuksek-sirali egimleri hesaplayabiliriz. Ornegin:
###Code
x = tf.Variable(1.0) # 1.0 degerine ilklenmis bir Tensorflow degiskeni olusturalim
with tf.GradientTape() as t:
with tf.GradientTape() as t2:
y = x * x * x
# 't' kapsam yoneticisi icerisinde egimi hesaplayalim
# ki bu egim hesaplanmasinin turevlenebilir oldugu anlamina gelir.
dy_dx = t2.gradient(y, x)
d2y_dx2 = t.gradient(dy_dx, x)
assert dy_dx.numpy() == 3.0
assert d2y_dx2.numpy() == 6.0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Otomatik degisim ve egim banti Run in Google Colab View source on GitHub Bir onceki egitim kitapciginda 'Tensor'lari ve onlar ustunde kullanabileceginiz operasyonlari tanittik. Bu kitapcikta ise makine ogrenmesi modellerinin eniyilenmesinde onemli bir teknik olan [otomatik degisimi](https://en.wikipedia.org/wiki/Automatic_differentiation) ogrenecegiz. Kurulum
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
###Output
_____no_output_____
###Markdown
Egim bantlariTensorFlow'un [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) API'si otomatik degisim yani girdi degiskenlerine bagli olarak hesaplanan egimin hesaplanisini hali hazirda bize saglar. Tensorflow `tf.GradientTape` kapsaminda yapilan butun operasyonlari bir "tape(bant)"e "kaydeder". Tensorflow daha sonra "kaydedilmis" egimleri, bu bant ve her bir kayitla iliskili egim verilerini [ters mod degisimi](https://en.wikipedia.org/wiki/Automatic_differentiation) kullanarak hesaplar.Ornegin:
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Orjinal girdi tensoru x'e gore z'nin turevi
dz_dx = t.gradient(z, x)
for i in [0, 1]:
for j in [0, 1]:
assert dz_dx[i][j].numpy() == 8.0
###Output
_____no_output_____
###Markdown
Ayrica "kaydedilmis" 'tf.GradientTape' kapsaminda hesaplanan ara degerlere gore ciktilari egimini de isteyebilirsiniz.
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Banti kullanarak ara deger y'ye gore z'nin turevini hesaplayabiliriz.
dz_dy = t.gradient(z, y)
assert dz_dy.numpy() == 8.0
###Output
_____no_output_____
###Markdown
GradientTape.gradient() yontemini cagirdimizda GradientTape tarafindan tutulan kaynaklar serbest birakilir. Ayni degerleri kullanarak birden fazla egim hesaplamak istiyorsaniz 'persistent(kalici)' egim banti olusturmalisiniz. Bu sayede bant nesnesi cop toplayicisi tarafindan toplanip kaynaklar serbest birakildikca 'gradient()' yontemini bircok kere cagirmamiza izin verir. Ornegin:
###Code
x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as t:
t.watch(x)
y = x * x
z = y * y
dz_dx = t.gradient(z, x) # 108.0 (4*x^3 at x = 3)
dy_dx = t.gradient(y, x) # 6.0
del t # Referansi banta indirgeyelim
###Output
_____no_output_____
###Markdown
Kontrol akimini kaydedelimBantlar operasyonlar yurutuldukce kaydettigi icin, Python kontrol akimlari (`if`ler ve `while`lar gibi) dogal olarak islenir:
###Code
def f(x, y):
output = 1.0
for i in range(y):
if i > 1 and i < 5:
output = tf.multiply(output, x)
return output
def grad(x, y):
with tf.GradientTape() as t:
t.watch(x)
out = f(x, y)
return t.gradient(out, x)
x = tf.convert_to_tensor(2.0)
assert grad(x, 6).numpy() == 12.0
assert grad(x, 5).numpy() == 12.0
assert grad(x, 4).numpy() == 4.0
###Output
_____no_output_____
###Markdown
Yuksek-sirali egimler`GradientTape` kapsam yoneticisindeki operasyonlar otomatik degisim icin kaydedilir. Eger egimler bu kapsamda hesaplandiysa onlar da ayni sekilde kaydedilir. Sonuc olarak, ayni API'yi kullanarak yuksek-sirali egimleri hesaplayabiliriz. Ornegin:
###Code
x = tf.Variable(1.0) # 1.0 degerine ilklenmis bir Tensorflow degiskeni olusturalim
with tf.GradientTape() as t:
with tf.GradientTape() as t2:
y = x * x * x
# 't' kapsam yoneticisi icerisinde egimi hesaplayalim
# ki bu egim hesaplanmasinin turevlenebilir oldugu anlamina gelir.
dy_dx = t2.gradient(y, x)
d2y_dx2 = t.gradient(dy_dx, x)
assert dy_dx.numpy() == 3.0
assert d2y_dx2.numpy() == 6.0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Otomatik degisim ve egim banti Run in Google Colab View source on GitHub Bir onceki egitim kitapciginda 'Tensor'lari ve onlar ustunde kullanabileceginiz operasyonlari tanittik. Bu kitapcikta ise makine ogrenmesi modellerinin eniyilenmesinde onemli bir teknik olan [otomatik degisimi](https://en.wikipedia.org/wiki/Automatic_differentiation) ogrenecegiz. Kurulum
###Code
import tensorflow.compat.v1 as tf
###Output
_____no_output_____
###Markdown
Egim bantlariTensorFlow'un [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) API'si otomatik degisim yani girdi degiskenlerine bagli olarak hesaplanan egimin hesaplanisini hali hazirda bize saglar. Tensorflow `tf.GradientTape` kapsaminda yapilan butun operasyonlari bir "tape(bant)"e "kaydeder". Tensorflow daha sonra "kaydedilmis" egimleri, bu bant ve her bir kayitla iliskili egim verilerini [ters mod degisimi](https://en.wikipedia.org/wiki/Automatic_differentiation) kullanarak hesaplar.Ornegin:
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Orjinal girdi tensoru x'e gore z'nin turevi
dz_dx = t.gradient(z, x)
for i in [0, 1]:
for j in [0, 1]:
assert dz_dx[i][j].numpy() == 8.0
###Output
_____no_output_____
###Markdown
Ayrica "kaydedilmis" 'tf.GradientTape' kapsaminda hesaplanan ara degerlere gore ciktilari egimini de isteyebilirsiniz.
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Banti kullanarak ara deger y'ye gore z'nin turevini hesaplayabiliriz.
dz_dy = t.gradient(z, y)
assert dz_dy.numpy() == 8.0
###Output
_____no_output_____
###Markdown
GradientTape.gradient() yontemini cagirdimizda GradientTape tarafindan tutulan kaynaklar serbest birakilir. Ayni degerleri kullanarak birden fazla egim hesaplamak istiyorsaniz 'persistent(kalici)' egim banti olusturmalisiniz. Bu sayede bant nesnesi cop toplayicisi tarafindan toplanip kaynaklar serbest birakildikca 'gradient()' yontemini bircok kere cagirmamiza izin verir. Ornegin:
###Code
x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as t:
t.watch(x)
y = x * x
z = y * y
dz_dx = t.gradient(z, x) # 108.0 (4*x^3 at x = 3)
dy_dx = t.gradient(y, x) # 6.0
del t # Referansi banta indirgeyelim
###Output
_____no_output_____
###Markdown
Kontrol akimini kaydedelimBantlar operasyonlar yurutuldukce kaydettigi icin, Python kontrol akimlari (`if`ler ve `while`lar gibi) dogal olarak islenir:
###Code
def f(x, y):
output = 1.0
for i in range(y):
if i > 1 and i < 5:
output = tf.multiply(output, x)
return output
def grad(x, y):
with tf.GradientTape() as t:
t.watch(x)
out = f(x, y)
return t.gradient(out, x)
x = tf.convert_to_tensor(2.0)
assert grad(x, 6).numpy() == 12.0
assert grad(x, 5).numpy() == 12.0
assert grad(x, 4).numpy() == 4.0
###Output
_____no_output_____
###Markdown
Yuksek-sirali egimler`GradientTape` kapsam yoneticisindeki operasyonlar otomatik degisim icin kaydedilir. Eger egimler bu kapsamda hesaplandiysa onlar da ayni sekilde kaydedilir. Sonuc olarak, ayni API'yi kullanarak yuksek-sirali egimleri hesaplayabiliriz. Ornegin:
###Code
x = tf.Variable(1.0) # 1.0 degerine ilklenmis bir Tensorflow degiskeni olusturalim
with tf.GradientTape() as t:
with tf.GradientTape() as t2:
y = x * x * x
# 't' kapsam yoneticisi icerisinde egimi hesaplayalim
# ki bu egim hesaplanmasinin turevlenebilir oldugu anlamina gelir.
dy_dx = t2.gradient(y, x)
d2y_dx2 = t.gradient(dy_dx, x)
assert dy_dx.numpy() == 3.0
assert d2y_dx2.numpy() == 6.0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Otomatik degisim ve egim banti Run in Google Colab View source on GitHub Bir onceki egitim kitapciginda 'Tensor'lari ve onlar ustunde kullanabileceginiz operasyonlari tanittik. Bu kitapcikta ise makine ogrenmesi modellerinin eniyilenmesinde onemli bir teknik olan [otomatik degisimi](https://en.wikipedia.org/wiki/Automatic_differentiation) ogrenecegiz. Kurulum
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
tf.enable_eager_execution()
###Output
_____no_output_____
###Markdown
Egim bantlariTensorFlow'un [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) API'si otomatik degisim yani girdi degiskenlerine bagli olarak hesaplanan egimin hesaplanisini hali hazirda bize saglar. Tensorflow `tf.GradientTape` kapsaminda yapilan butun operasyonlari bir "tape(bant)"e "kaydeder". Tensorflow daha sonra "kaydedilmis" egimleri, bu bant ve her bir kayitla iliskili egim verilerini [ters mod degisimi](https://en.wikipedia.org/wiki/Automatic_differentiation) kullanarak hesaplar.Ornegin:
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Orjinal girdi tensoru x'e gore z'nin turevi
dz_dx = t.gradient(z, x)
for i in [0, 1]:
for j in [0, 1]:
assert dz_dx[i][j].numpy() == 8.0
###Output
_____no_output_____
###Markdown
Ayrica "kaydedilmis" 'tf.GradientTape' kapsaminda hesaplanan ara degerlere gore ciktilari egimini de isteyebilirsiniz.
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Banti kullanarak ara deger y'ye gore z'nin turevini hesaplayabiliriz.
dz_dy = t.gradient(z, y)
assert dz_dy.numpy() == 8.0
###Output
_____no_output_____
###Markdown
GradientTape.gradient() yontemini cagirdimizda GradientTape tarafindan tutulan kaynaklar serbest birakilir. Ayni degerleri kullanarak birden fazla egim hesaplamak istiyorsaniz 'persistent(kalici)' egim banti olusturmalisiniz. Bu sayede bant nesnesi cop toplayicisi tarafindan toplanip kaynaklar serbest birakildikca 'gradient()' yontemini bircok kere cagirmamiza izin verir. Ornegin:
###Code
x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as t:
t.watch(x)
y = x * x
z = y * y
dz_dx = t.gradient(z, x) # 108.0 (4*x^3 at x = 3)
dy_dx = t.gradient(y, x) # 6.0
del t # Referansi banta indirgeyelim
###Output
_____no_output_____
###Markdown
Kontrol akimini kaydedelimBantlar operasyonlar yurutuldukce kaydettigi icin, Python kontrol akimlari (`if`ler ve `while`lar gibi) dogal olarak islenir:
###Code
def f(x, y):
output = 1.0
for i in range(y):
if i > 1 and i < 5:
output = tf.multiply(output, x)
return output
def grad(x, y):
with tf.GradientTape() as t:
t.watch(x)
out = f(x, y)
return t.gradient(out, x)
x = tf.convert_to_tensor(2.0)
assert grad(x, 6).numpy() == 12.0
assert grad(x, 5).numpy() == 12.0
assert grad(x, 4).numpy() == 4.0
###Output
_____no_output_____
###Markdown
Yuksek-sirali egimler`GradientTape` kapsam yoneticisindeki operasyonlar otomatik degisim icin kaydedilir. Eger egimler bu kapsamda hesaplandiysa onlar da ayni sekilde kaydedilir. Sonuc olarak, ayni API'yi kullanarak yuksek-sirali egimleri hesaplayabiliriz. Ornegin:
###Code
x = tf.Variable(1.0) # 1.0 degerine ilklenmis bir Tensorflow degiskeni olusturalim
with tf.GradientTape() as t:
with tf.GradientTape() as t2:
y = x * x * x
# 't' kapsam yoneticisi icerisinde egimi hesaplayalim
# ki bu egim hesaplanmasinin turevlenebilir oldugu anlamina gelir.
dy_dx = t2.gradient(y, x)
d2y_dx2 = t.gradient(dy_dx, x)
assert dy_dx.numpy() == 3.0
assert d2y_dx2.numpy() == 6.0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Otomatik degisim ve egim banti Run in Google Colab View source on GitHub Bir onceki egitim kitapciginda 'Tensor'lari ve onlar ustunde kullanabileceginiz operasyonlari tanittik. Bu kitapcikta ise makine ogrenmesi modellerinin eniyilenmesinde onemli bir teknik olan [otomatik degisimi](https://en.wikipedia.org/wiki/Automatic_differentiation) ogrenecegiz. Kurulum
###Code
import tensorflow.compat.v1 as tf
###Output
_____no_output_____
###Markdown
Egim bantlariTensorFlow'un [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) API'si otomatik degisim yani girdi degiskenlerine bagli olarak hesaplanan egimin hesaplanisini hali hazirda bize saglar. Tensorflow `tf.GradientTape` kapsaminda yapilan butun operasyonlari bir "tape(bant)"e "kaydeder". Tensorflow daha sonra "kaydedilmis" egimleri, bu bant ve her bir kayitla iliskili egim verilerini [ters mod degisimi](https://en.wikipedia.org/wiki/Automatic_differentiation) kullanarak hesaplar.Ornegin:
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Orjinal girdi tensoru x'e gore z'nin turevi
dz_dx = t.gradient(z, x)
for i in [0, 1]:
for j in [0, 1]:
assert dz_dx[i][j].numpy() == 8.0
###Output
_____no_output_____
###Markdown
Ayrica "kaydedilmis" 'tf.GradientTape' kapsaminda hesaplanan ara degerlere gore ciktilari egimini de isteyebilirsiniz.
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Banti kullanarak ara deger y'ye gore z'nin turevini hesaplayabiliriz.
dz_dy = t.gradient(z, y)
assert dz_dy.numpy() == 8.0
###Output
_____no_output_____
###Markdown
GradientTape.gradient() yontemini cagirdimizda GradientTape tarafindan tutulan kaynaklar serbest birakilir. Ayni degerleri kullanarak birden fazla egim hesaplamak istiyorsaniz 'persistent(kalici)' egim banti olusturmalisiniz. Bu sayede bant nesnesi cop toplayicisi tarafindan toplanip kaynaklar serbest birakildikca 'gradient()' yontemini bircok kere cagirmamiza izin verir. Ornegin:
###Code
x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as t:
t.watch(x)
y = x * x
z = y * y
dz_dx = t.gradient(z, x) # 108.0 (4*x^3 at x = 3)
dy_dx = t.gradient(y, x) # 6.0
del t # Referansi banta indirgeyelim
###Output
_____no_output_____
###Markdown
Kontrol akimini kaydedelimBantlar operasyonlar yurutuldukce kaydettigi icin, Python kontrol akimlari (`if`ler ve `while`lar gibi) dogal olarak islenir:
###Code
def f(x, y):
output = 1.0
for i in range(y):
if i > 1 and i < 5:
output = tf.multiply(output, x)
return output
def grad(x, y):
with tf.GradientTape() as t:
t.watch(x)
out = f(x, y)
return t.gradient(out, x)
x = tf.convert_to_tensor(2.0)
assert grad(x, 6).numpy() == 12.0
assert grad(x, 5).numpy() == 12.0
assert grad(x, 4).numpy() == 4.0
###Output
_____no_output_____
###Markdown
Yuksek-sirali egimler`GradientTape` kapsam yoneticisindeki operasyonlar otomatik degisim icin kaydedilir. Eger egimler bu kapsamda hesaplandiysa onlar da ayni sekilde kaydedilir. Sonuc olarak, ayni API'yi kullanarak yuksek-sirali egimleri hesaplayabiliriz. Ornegin:
###Code
x = tf.Variable(1.0) # 1.0 degerine ilklenmis bir Tensorflow degiskeni olusturalim
with tf.GradientTape() as t:
with tf.GradientTape() as t2:
y = x * x * x
# 't' kapsam yoneticisi icerisinde egimi hesaplayalim
# ki bu egim hesaplanmasinin turevlenebilir oldugu anlamina gelir.
dy_dx = t2.gradient(y, x)
d2y_dx2 = t.gradient(dy_dx, x)
assert dy_dx.numpy() == 3.0
assert d2y_dx2.numpy() == 6.0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Otomatik degisim ve egim banti Run in Google Colab View source on GitHub Bir onceki egitim kitapciginda 'Tensor'lari ve onlar ustunde kullanabileceginiz operasyonlari tanittik. Bu kitapcikta ise makine ogrenmesi modellerinin eniyilenmesinde onemli bir teknik olan [otomatik degisimi](https://en.wikipedia.org/wiki/Automatic_differentiation) ogrenecegiz. Kurulum
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
###Output
_____no_output_____
###Markdown
Egim bantlariTensorFlow'un [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) API'si otomatik degisim yani girdi degiskenlerine bagli olarak hesaplanan egimin hesaplanisini hali hazirda bize saglar. Tensorflow `tf.GradientTape` kapsaminda yapilan butun operasyonlari bir "tape(bant)"e "kaydeder". Tensorflow daha sonra "kaydedilmis" egimleri, bu bant ve her bir kayitla iliskili egim verilerini [ters mod degisimi](https://en.wikipedia.org/wiki/Automatic_differentiation) kullanarak hesaplar.Ornegin:
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Orjinal girdi tensoru x'e gore z'nin turevi
dz_dx = t.gradient(z, x)
for i in [0, 1]:
for j in [0, 1]:
assert dz_dx[i][j].numpy() == 8.0
###Output
_____no_output_____
###Markdown
Ayrica "kaydedilmis" 'tf.GradientTape' kapsaminda hesaplanan ara degerlere gore ciktilari egimini de isteyebilirsiniz.
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Banti kullanarak ara deger y'ye gore z'nin turevini hesaplayabiliriz.
dz_dy = t.gradient(z, y)
assert dz_dy.numpy() == 8.0
###Output
_____no_output_____
###Markdown
GradientTape.gradient() yontemini cagirdimizda GradientTape tarafindan tutulan kaynaklar serbest birakilir. Ayni degerleri kullanarak birden fazla egim hesaplamak istiyorsaniz 'persistent(kalici)' egim banti olusturmalisiniz. Bu sayede bant nesnesi cop toplayicisi tarafindan toplanip kaynaklar serbest birakildikca 'gradient()' yontemini bircok kere cagirmamiza izin verir. Ornegin:
###Code
x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as t:
t.watch(x)
y = x * x
z = y * y
dz_dx = t.gradient(z, x) # 108.0 (4*x^3 at x = 3)
dy_dx = t.gradient(y, x) # 6.0
del t # Referansi banta indirgeyelim
###Output
_____no_output_____
###Markdown
Kontrol akimini kaydedelimBantlar operasyonlar yurutuldukce kaydettigi icin, Python kontrol akimlari (`if`ler ve `while`lar gibi) dogal olarak islenir:
###Code
def f(x, y):
output = 1.0
for i in range(y):
if i > 1 and i < 5:
output = tf.multiply(output, x)
return output
def grad(x, y):
with tf.GradientTape() as t:
t.watch(x)
out = f(x, y)
return t.gradient(out, x)
x = tf.convert_to_tensor(2.0)
assert grad(x, 6).numpy() == 12.0
assert grad(x, 5).numpy() == 12.0
assert grad(x, 4).numpy() == 4.0
###Output
_____no_output_____
###Markdown
Yuksek-sirali egimler`GradientTape` kapsam yoneticisindeki operasyonlar otomatik degisim icin kaydedilir. Eger egimler bu kapsamda hesaplandiysa onlar da ayni sekilde kaydedilir. Sonuc olarak, ayni API'yi kullanarak yuksek-sirali egimleri hesaplayabiliriz. Ornegin:
###Code
x = tf.Variable(1.0) # 1.0 degerine ilklenmis bir Tensorflow degiskeni olusturalim
with tf.GradientTape() as t:
with tf.GradientTape() as t2:
y = x * x * x
# 't' kapsam yoneticisi icerisinde egimi hesaplayalim
# ki bu egim hesaplanmasinin turevlenebilir oldugu anlamina gelir.
dy_dx = t2.gradient(y, x)
d2y_dx2 = t.gradient(dy_dx, x)
assert dy_dx.numpy() == 3.0
assert d2y_dx2.numpy() == 6.0
###Output
_____no_output_____ |
notebooks/MultipleLinearRegression.ipynb | ###Markdown
Multiple linear regressionThis exercise is to try and replicate and then improve the results obtained in https://pubs.acs.org/doi/abs/10.1021/ci9901338.You've covered performing linear regression using a single feature in a previous notebook.We can use each descriptor as a variable in a linear regression to predict Log S.1. The paper uses particular descriptors and has been able to get an $R^2$ of 0.88 - can you replicate this? If you get different results, why? 2. Can you beat it using different/additional descriptors?3. At what point are you at risk of [overfitting](https://en.wikipedia.org/wiki/Overfitting)? Notes:* A reminder for selecting multiple columns in a pandas dataframe: x = data_desc[["a","b","c"]] y = data["y"] model.fit(x,y) * You can use whichever validation technique you prefer, or if you wish, can match that used in the paper. * The authors used leave-one-out cross validation (a single sample is held out rather than a number), and then test1 to evaluate model performance. I have given the code for LeaveOneOut as it is a bit tricky. * You may prefer to use an alternative cross validation approach, like you've seen in the previous notebook.* You may not be able to use the exact same descriptors, so find the closest match - some may not even be available to you.* It is worth using both MSE and $R^2$ to look at your model performance.* It can be helpful to see scatter plots - but remember these are 2D. (If you have more than one feature, your data will be of higher dimensions).* Feel free to refer back to previous notebooks.Steps to include:1. Load in python modules2. Load in data - I have demonstrated how to load 'train'.3. Select descriptors4. Train model and evaluate performance using cross validation, and then test using test1.Optional extras:1. Use a [decision tree](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html), [for more info](http://scikit-learn.org/stable/modules/tree.html#tree)2. Use a [random forest model](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) with all descriptors, [for more info](http://scikit-learn.org/stable/modules/ensemble.html#forest)3. Draw molecules for outliers - the molecules with a high difference between the predicted Log S and the actual Log S - do they have anything in common?
###Code
import pandas as pd
#training set:
train_desc = pd.read_csv("../data/train_desc.csv",index_col=0)
train = pd.read_csv("../data/train_nonull.csv", index_col=0)
train_desc["Y"] = train["Log S"].values
#test1 set:
test1_desc = pd.read_csv("../data/test1_desc.csv",index_col=0) #replace with correct code
test1 = pd.read_csv("../data/test1_nonull.csv", index_col=0) #replace with correct code
test1_desc["Y"] = test1["Log S"].values #replace with correct code
train_desc.head()
#code for leave one out cross validation - it will not run until a model and a list of features are defined (an example setup is sketched just below)
from sklearn.model_selection import LeaveOneOut, cross_validate
from sklearn.metrics import r2_score, mean_squared_error
loo = LeaveOneOut()
predictions = [] #creates an empty list so we can save the predictions
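# Example setup so the loop below can run (the descriptor names here are assumptions -
# replace them with columns that actually exist in train_desc and match the paper where possible):
from sklearn.linear_model import LinearRegression
features = ["MolLogP", "MolWt", "TPSA", "NumRotatableBonds"] # assumed descriptor column names
model = LinearRegression()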
for sub_train, sub_test in loo.split(train_desc): #loo.split is a generator object, in each iteration sub_test is one sample,
#sub_train are all the rest
x = train_desc.loc[sub_train][features] #save x
y = train_desc.loc[sub_train]["Y"] #save y
model.fit(x,y) #fit the model
test_x = train_desc.loc[sub_test][features] #predict the value for the single sample
predictions.append(model.predict(test_x)[0]) #append the prediction to a list, we use [0] to state the first (and only) item in the returned array
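# After the loop, the leave-one-out predictions can be scored against the observed Log S values
# using both of the metrics suggested above:
print("LOO R2: ", r2_score(train_desc["Y"], predictions))
print("LOO MSE:", mean_squared_error(train_desc["Y"], predictions))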
#work here
###Output
_____no_output_____ |
courses/udacity_intro_to_tensorflow_for_deep_learning/l01c01_introduction_to_colab_and_python.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Introduction to Colab and Python** Run in Google Colab View source on GitHub Welcome to this Colab where you will get a quick introduction to the Python programming language and the environment used for the course's exercises: Colab.Colab is a Python development environment that runs in the browser using Google Cloud.For example, to print "Hello World", just hover the mouse over [ ] and press the play button to the upper left. Or press shift-enter to execute.
###Code
print("Hello World")
###Output
_____no_output_____
###Markdown
Functions, Conditionals, and Iteration. Let's create a Python function, and call it from a loop.
###Code
def HelloWorldXY(x, y):
if (x < 10):
print("Hello World, x was < 10")
elif (x < 20):
print("Hello World, x was >= 10 but < 20")
else:
print("Hello World, x was >= 20")
return x + y
for i in range(8, 25, 5): # i=8, 13, 18, 23 (start, stop, step)
print("--- Now running with i: {}".format(i))
r = HelloWorldXY(i,i)
print("Result from HelloWorld: {}".format(r))
print(HelloWorldXY(1,2))
###Output
_____no_output_____
###Markdown
Easy, right? If you want a loop starting at 0 and ending at 2 (exclusive), you could do any of the following
###Code
print("Iterate over the items. `range(2)` is like a list [0,1].")
for i in range(2):
print(i)
print("Iterate over an actual list.")
for i in [0,1]:
print(i)
print("While works")
i = 0
while i < 2:
print(i)
i += 1
print("Python supports standard key words like continue and break")
while True:
print("Entered while")
break
###Output
_____no_output_____
###Markdown
Numpy and lists. Python has lists built into the language. However, we will use a library called numpy for this. Numpy gives you lots of support functions that are useful when doing Machine Learning. Here, you will also see an import statement. This statement makes the entire numpy package available and we can access those symbols using the abbreviated 'np' syntax.
###Code
import numpy as np # Make numpy available using np.
# Create a numpy array, and append an element
a = np.array(["Hello", "World"])
a = np.append(a, "!")
print("Current array: {}".format(a))
print("Printing each element")
for i in a:
print(i)
print("\nPrinting each element and their index")
for i,e in enumerate(a):
print("Index: {}, was: {}".format(i, e))
print("\nShowing some basic math on arrays")
b = np.array([0,1,4,3,2])
print("Max: {}".format(np.max(b)))
print("Average: {}".format(np.average(b)))
print("Max index: {}".format(np.argmax(b)))
print("\nYou can print the type of anything")
print("Type of b: {}, type of b[0]: {}".format(type(b), type(b[0])))
print("\nUse numpy to create a [3,3] dimension array with random number")
c = np.random.rand(3, 3)
print(c)
print("\nYou can print the dimensions of arrays")
print("Shape of a: {}".format(a.shape))
print("Shape of b: {}".format(b.shape))
print("Shape of c: {}".format(c.shape))
print("...Observe, Python uses both [0,1,2] and (0,1,2) to specify lists")
###Output
_____no_output_____
###Markdown
Colab Specifics Colab is a virtual machine you can access directly. To run commands at the VM's terminal, prefix the line with an exclamation point (!).
###Code
print("\nDoing $ls on filesystem")
!ls -l
!pwd
print("Install numpy") # Just for test, numpy is actually preinstalled in all Colab instances
!pip install numpy
###Output
_____no_output_____
###Markdown
**Exercise** Create a code cell underneath this text cell and add code to: * List the path of the current directory (pwd) * Go to / (cd) and list the content (ls -l)
###Code
!pwd
# note: each "!" command runs in its own shell, so a bare "!cd /" does not change
# the directory for the next line - chain the commands (or use "%cd /") instead
!cd / && ls -l
print("Hello")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Introduction to Colab and Python** Run in Google Colab View source on GitHub Welcome to this Colab where you will get a quick introduction to the Python programming language and the environment used for the course's excersises: Colab.Colab is a Python development environment that runs in the browser using Google Cloud. For example, to print "Hello World", just hover the mouse over [ ] and press the play button to the upper left. Or press shift-enter to execute.
###Code
# Never mind this statement, for compatibility reasons
from __future__ import absolute_import, division, print_function
print("Hello World")
###Output
_____no_output_____
###Markdown
Functions, Conditionals, and InterationLet's create a Python function, and call it from a loop.
###Code
def HelloWorldXY(x, y):
if (x < 10):
print("Hello World, x was < 10")
elif (x < 20):
print("Hello World, x was >= 10 but < 20")
else:
print("Hello World, x was >= 20")
return x + y
for i in range(8, 25, 5): # i=8, 13, 18, 23 (start, stop, step)
print("--- Now running with i: {}".format(i))
r = HelloWorldXY(i,i)
print("Result from HelloWorld: {}".format(i, r))
print(HelloWorldXY(1,2))
###Output
_____no_output_____
###Markdown
Easy, right?If you want a loop starting at 0 to 2 (exclusive) you could do any of the following
###Code
print("Iterate over the items. `range(2)` is like a list [0,1].")
for i in range(2):
print(i)
print("Iterate over an actual list.")
for i in [0,1]:
print(i)
print("While works")
i = 0
while i < 2:
print(i)
i += 1
print("Python supports standard key words like continue and break")
while True:
print("Entered while")
break
###Output
_____no_output_____
###Markdown
Numpy and listsPython has lists built into the language.However, we will use a library called numpy for this.Numpy gives you lot's of support functions that are useful when doing Machine Learning.Here, you will also see an import statement. This statement makes the entire numpy package available and we can access those symbols using the abbreviated 'np' syntax.
###Code
import numpy as np # Make numpy available using np.
# Create a numpy array, and append an element
a = np.array(["Hello", "World"])
a = np.append(a, "!")
print("Current array: {}".format(a))
print("Printing each element")
for i in a:
print(i)
print("\nPrinting each element and their index")
for i,e in enumerate(a):
print("Index: {}, was: {}".format(i, e))
print("\nShowing some basic math on arrays")
b = np.array([0,1,4,3,2])
print("Max: {}".format(np.max(b)))
print("Average: {}".format(np.average(b)))
print("Max index: {}".format(np.argmax(b)))
print("\nYou can print the type of anything")
print("Type of b: {}, type of b[0]: {}".format(type(a), type(a[0])))
print("\nUse numpy to create a [3,3] dimension array with random number")
c = np.random.rand(3, 3)
print(c)
print("\nYou can print the dimensions of arrays")
print("Shape of a: {}".format(a.shape))
print("Shape of b: {}".format(b.shape))
print("Shape of c: {}".format(c.shape))
print("...Observe, Python uses both [0,1,2] and (0,1,2) to specify lists")
###Output
_____no_output_____
###Markdown
Colab Specifics Colab is a virtual machine you can access directly. To run commands at the VM's terminal, prefix the line with an exclamation point (!).
###Code
print("\nDoing $ls on filesystem")
!ls -l
!pwd
print("Install numpy") # Just for test, numpy is actually preinstalled in all Colab instancs
!pip install numpy
###Output
_____no_output_____
###Markdown
**Exercise**Create a code cell underneath this text cell and add code to:* List the path of the current directory (pwd)* Go to / (cd) and list the content (ls -l)
###Code
!pwd
!cd /
!ls -l
print("Hello")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Introduction to Colab and Python** Run in Google Colab View source on GitHub Welcome to this Colab where you will get a quick introduction to the Python programming language and the environment used for the course's exercises: Colab.Colab is a Python development environment that runs in the browser using Google Cloud. For example, to print "Hello World", just hover the mouse over [ ] and press the play button to the upper left. Or press shift-enter to execute.
###Code
# Never mind this statement, for compatibility reasons
from __future__ import absolute_import, division, print_function
print("Hello World")
###Output
_____no_output_____
###Markdown
Functions, Conditionals, and InterationLet's create a Python function, and call it from a loop.
###Code
def HelloWorldXY(x, y):
if (x < 10):
print("Hello World, x was < 10")
elif (x < 20):
print("Hello World, x was >= 10 but < 20")
else:
print("Hello World, x was >= 20")
return x + y
for i in range(8, 25, 5): # i=8, 13, 18, 23 (start, stop, step)
print("--- Now running with i: {}".format(i))
r = HelloWorldXY(i,i)
print("Result from HelloWorld: {}".format(r))
print(HelloWorldXY(1,2))
###Output
_____no_output_____
###Markdown
Easy, right?If you want a loop starting at 0 to 2 (exclusive) you could do any of the following
###Code
print("Iterate over the items. `range(2)` is like a list [0,1].")
for i in range(2):
print(i)
print("Iterate over an actual list.")
for i in [0,1]:
print(i)
print("While works")
i = 0
while i < 2:
print(i)
i += 1
print("Python supports standard key words like continue and break")
while True:
print("Entered while")
break
###Output
_____no_output_____
###Markdown
Numpy and listsPython has lists built into the language.However, we will use a library called numpy for this.Numpy gives you lot's of support functions that are useful when doing Machine Learning.Here, you will also see an import statement. This statement makes the entire numpy package available and we can access those symbols using the abbreviated 'np' syntax.
###Code
import numpy as np # Make numpy available using np.
# Create a numpy array, and append an element
a = np.array(["Hello", "World"])
a = np.append(a, "!")
print("Current array: {}".format(a))
print("Printing each element")
for i in a:
print(i)
print("\nPrinting each element and their index")
for i,e in enumerate(a):
print("Index: {}, was: {}".format(i, e))
print("\nShowing some basic math on arrays")
b = np.array([0,1,4,3,2])
print("Max: {}".format(np.max(b)))
print("Average: {}".format(np.average(b)))
print("Max index: {}".format(np.argmax(b)))
print("\nYou can print the type of anything")
print("Type of b: {}, type of b[0]: {}".format(type(b), type(b[0])))
print("\nUse numpy to create a [3,3] dimension array with random number")
c = np.random.rand(3, 3)
print(c)
print("\nYou can print the dimensions of arrays")
print("Shape of a: {}".format(a.shape))
print("Shape of b: {}".format(b.shape))
print("Shape of c: {}".format(c.shape))
print("...Observe, Python uses both [0,1,2] and (0,1,2) to specify lists")
###Output
_____no_output_____
###Markdown
Colab Specifics Colab is a virtual machine you can access directly. To run commands at the VM's terminal, prefix the line with an exclamation point (!).
###Code
print("\nDoing $ls on filesystem")
!ls -l
!pwd
print("Install numpy") # Just for test, numpy is actually preinstalled in all Colab instancs
!pip install numpy
###Output
_____no_output_____
###Markdown
**Exercise**Create a code cell underneath this text cell and add code to:* List the path of the current directory (pwd)* Go to / (cd) and list the content (ls -l)
###Code
!pwd
!cd /
!ls -l
print("Hello")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Introduction to Colab and Python** Run in Google Colab View source on GitHub Welcome to this Colab where you will get a quick introduction to the Python programming language and the environment used for the course's exercises: Colab.Colab is a Python development environment that runs in the browser using Google Cloud.For example, to print "Hello World", just hover the mouse over [ ] and press the play button to the upper left. Or press shift-enter to execute.
###Code
# Never mind this statement, for compatibility reasons
from __future__ import absolute_import, division, print_function, unicode_literals
print("Hello World")
###Output
_____no_output_____
###Markdown
Functions, Conditionals, and IterationLet's create a Python function, and call it from a loop.
###Code
def HelloWorldXY(x, y):
if (x < 10):
print("Hello World, x was < 10")
elif (x < 20):
print("Hello World, x was >= 10 but < 20")
else:
print("Hello World, x was >= 20")
return x + y
for i in range(8, 25, 5): # i=8, 13, 18, 23 (start, stop, step)
print("--- Now running with i: {}".format(i))
r = HelloWorldXY(i,i)
print("Result from HelloWorld: {}".format(r))
print(HelloWorldXY(1,2))
###Output
_____no_output_____
###Markdown
Easy, right?If you want a loop starting at 0 to 2 (exclusive) you could do any of the following
###Code
print("Iterate over the items. `range(2)` is like a list [0,1].")
for i in range(2):
print(i)
print("Iterate over an actual list.")
for i in [0,1]:
print(i)
print("While works")
i = 0
while i < 2:
print(i)
i += 1
print("Python supports standard key words like continue and break")
while True:
print("Entered while")
break
###Output
_____no_output_____
###Markdown
Numpy and listsPython has lists built into the language.However, we will use a library called numpy for this.Numpy gives you lots of support functions that are useful when doing Machine Learning.Here, you will also see an import statement. This statement makes the entire numpy package available and we can access those symbols using the abbreviated 'np' syntax.
###Code
import numpy as np # Make numpy available using np.
# Create a numpy array, and append an element
a = np.array(["Hello", "World"])
a = np.append(a, "!")
print("Current array: {}".format(a))
print("Printing each element")
for i in a:
print(i)
print("\nPrinting each element and their index")
for i,e in enumerate(a):
print("Index: {}, was: {}".format(i, e))
print("\nShowing some basic math on arrays")
b = np.array([0,1,4,3,2])
print("Max: {}".format(np.max(b)))
print("Average: {}".format(np.average(b)))
print("Max index: {}".format(np.argmax(b)))
print("\nYou can print the type of anything")
print("Type of b: {}, type of b[0]: {}".format(type(b), type(b[0])))
print("\nUse numpy to create a [3,3] dimension array with random number")
c = np.random.rand(3, 3)
print(c)
print("\nYou can print the dimensions of arrays")
print("Shape of a: {}".format(a.shape))
print("Shape of b: {}".format(b.shape))
print("Shape of c: {}".format(c.shape))
print("...Observe, Python uses both [0,1,2] and (0,1,2) to specify lists")
###Output
_____no_output_____
###Markdown
Colab Specifics Colab is a virtual machine you can access directly. To run commands at the VM's terminal, prefix the line with an exclamation point (!).
###Code
print("\nDoing $ls on filesystem")
!ls -l
!pwd
print("Install numpy") # Just for test, numpy is actually preinstalled in all Colab instances
!pip install numpy
###Output
_____no_output_____
###Markdown
**Exercise**Create a code cell underneath this text cell and add code to:* List the path of the current directory (pwd)* Go to / (cd) and list the content (ls -l)
###Code
!pwd
!cd /
!ls -l
print("Hello")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Introduction to Colab and Python** Run in Google Colab View source on GitHub Welcome to this Colab where you will get a quick introduction to the Python programming language and the environment used for the course's exercises: Colab.Colab is a Python development environment that runs in the browser using Google Cloud.For example, to print "Hello World", just hover the mouse over [ ] and press the play button to the upper left. Or press shift-enter to execute.
###Code
print("Hello World")
###Output
_____no_output_____
###Markdown
Functions, Conditionals, and IterationLet's create a Python function, and call it from a loop.
###Code
def HelloWorldXY(x, y):
if (x < 10):
print("Hello World, x was < 10")
elif (x < 20):
print("Hello World, x was >= 10 but < 20")
else:
print("Hello World, x was >= 20")
return x + y
for i in range(8, 25, 5): # i=8, 13, 18, 23 (start, stop, step)
print("--- Now running with i: {}".format(i))
r = HelloWorldXY(i,i)
print("Result from HelloWorld: {}".format(r))
print(HelloWorldXY(1,2))
###Output
_____no_output_____
###Markdown
Easy, right?If you want a loop starting at 0 to 2 (exclusive) you could do any of the following
###Code
print("Iterate over the items. `range(2)` is like a list [0,1].")
for i in range(2):
print(i)
print("Iterate over an actual list.")
for i in [0,1]:
print(i)
print("While works")
i = 0
while i < 2:
print(i)
i += 1
print("Python supports standard key words like continue and break")
while True:
print("Entered while")
break
###Output
_____no_output_____
###Markdown
Numpy and listsPython has lists built into the language.However, we will use a library called numpy for this.Numpy gives you lots of support functions that are useful when doing Machine Learning.Here, you will also see an import statement. This statement makes the entire numpy package available and we can access those symbols using the abbreviated 'np' syntax.
###Code
import numpy as np # Make numpy available using np.
# Create a numpy array, and append an element
a = np.array(["Hello", "World"])
a = np.append(a, "!")
print("Current array: {}".format(a))
print("Printing each element")
for i in a:
print(i)
print("\nPrinting each element and their index")
for i,e in enumerate(a):
print("Index: {}, was: {}".format(i, e))
print("\nShowing some basic math on arrays")
b = np.array([0,1,4,3,2])
print("Max: {}".format(np.max(b)))
print("Average: {}".format(np.average(b)))
print("Max index: {}".format(np.argmax(b)))
print("\nYou can print the type of anything")
print("Type of b: {}, type of b[0]: {}".format(type(b), type(b[0])))
print("\nUse numpy to create a [3,3] dimension array with random number")
c = np.random.rand(3, 3)
print(c)
print("\nYou can print the dimensions of arrays")
print("Shape of a: {}".format(a.shape))
print("Shape of b: {}".format(b.shape))
print("Shape of c: {}".format(c.shape))
print("...Observe, Python uses both [0,1,2] and (0,1,2) to specify lists")
###Output
_____no_output_____
###Markdown
Colab Specifics Colab is a virtual machine you can access directly. To run commands at the VM's terminal, prefix the line with an exclamation point (!).
###Code
print("\nDoing $ls on filesystem")
!ls -l
!pwd
print("Install numpy") # Just for test, numpy is actually preinstalled in all Colab instances
!pip install numpy
###Output
_____no_output_____
###Markdown
**Exercise**Create a code cell underneath this text cell and add code to:* List the path of the current directory (pwd)* Go to / (cd) and list the content (ls -l)
###Code
!pwd
!cd /
!ls -l
print("Hello")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Introduction to Colab and Python** Run in Google Colab View source on GitHub Welcome to this Colab where you will get a quick introduction to the Python programming language and the environment used for the course's exercises: Colab.Colab is a Python development environment that runs in the browser using Google Cloud.For example, to print "Hello World", just hover the mouse over [ ] and press the play button to the upper left. Or press shift-enter to execute.
###Code
print("Hello World")
###Output
Hello World
###Markdown
Functions, Conditionals, and IterationLet's create a Python function, and call it from a loop.
###Code
def HelloWorldXY(x, y):
if (x < 10):
print("Hello World, x was < 10")
elif (x < 20):
print("Hello World, x was >= 10 but < 20")
else:
print("Hello World, x was >= 20")
return x + y
for i in range(8, 25, 5): # i=8, 13, 18, 23 (start, stop, step)
print("--- Now running with i: {}".format(i))
r = HelloWorldXY(i,i)
print("Result from HelloWorld: {}".format(r))
print(HelloWorldXY(1,2))
###Output
Hello World, x was < 10
3
###Markdown
Easy, right?If you want a loop starting at 0 to 2 (exclusive) you could do any of the following
###Code
print("Iterate over the items. `range(2)` is like a list [0,1].")
for i in range(2):
print(i)
print("Iterate over an actual list.")
for i in [0,1]:
print(i)
print("While works")
i = 0
while i < 2:
print(i)
i += 1
print("Python supports standard key words like continue and break")
while True:
print("Entered while")
break
###Output
_____no_output_____
###Markdown
Numpy and listsPython has lists built into the language.However, we will use a library called numpy for this.Numpy gives you lots of support functions that are useful when doing Machine Learning.Here, you will also see an import statement. This statement makes the entire numpy package available and we can access those symbols using the abbreviated 'np' syntax.
###Code
import numpy as np # Make numpy available using np.
# Create a numpy array, and append an element
a = np.array(["Hello", "World"])
a = np.append(a, "!")
print("Current array: {}".format(a))
print("Printing each element")
for i in a:
print(i)
print("\nPrinting each element and their index")
for i,e in enumerate(a):
print("Index: {}, was: {}".format(i, e))
print("\nShowing some basic math on arrays")
b = np.array([0,1,4,3,2])
print("Max: {}".format(np.max(b)))
print("Average: {}".format(np.average(b)))
print("Max index: {}".format(np.argmax(b)))
print("\nYou can print the type of anything")
print("Type of b: {}, type of b[0]: {}".format(type(b), type(b[0])))
print("\nUse numpy to create a [3,3] dimension array with random number")
c = np.random.rand(3, 3)
print(c)
print("\nYou can print the dimensions of arrays")
print("Shape of a: {}".format(a.shape))
print("Shape of b: {}".format(b.shape))
print("Shape of c: {}".format(c.shape))
print("...Observe, Python uses both [0,1,2] and (0,1,2) to specify lists")
###Output
_____no_output_____
###Markdown
Colab Specifics Colab is a virtual machine you can access directly. To run commands at the VM's terminal, prefix the line with an exclamation point (!).
###Code
print("\nDoing $ls on filesystem")
!ls -l
!pwd
print("Install numpy") # Just for test, numpy is actually preinstalled in all Colab instances
!pip install numpy
###Output
Install numpy
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (1.19.5)
###Markdown
**Exercise**Create a code cell underneath this text cell and add code to:* List the path of the current directory (pwd)* Go to / (cd) and list the content (ls -l)
###Code
!pwd
!cd /
!ls -l
print("Hello")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Introduction to Colab and Python** Run in Google Colab View source on GitHub Welcome to this Colab where you will get a quick introduction to the Python programming language and the environment used for the course's exercises: Colab.Colab is a Python development environment that runs in the browser using Google Cloud.For example, to print "Hello World", just hover the mouse over [ ] and press the play button to the upper left. Or press shift-enter to execute.
###Code
# Never mind this statement, for compatibility reasons
from __future__ import absolute_import, division, print_function, unicode_literals
print("Hello World")
###Output
_____no_output_____
###Markdown
Functions, Conditionals, and IterationLet's create a Python function, and call it from a loop.
###Code
def HelloWorldXY(x, y):
if (x < 10):
print("Hello World, x was < 10")
elif (x < 20):
print("Hello World, x was >= 10 but < 20")
else:
print("Hello World, x was >= 20")
return x + y
for i in range(8, 25, 5): # i=8, 13, 18, 23 (start, stop, step)
print("--- Now running with i: {}".format(i))
r = HelloWorldXY(i,i)
print("Result from HelloWorld: {}".format(r))
print(HelloWorldXY(1,2))
###Output
_____no_output_____
###Markdown
Easy, right?If you want a loop starting at 0 to 2 (exclusive) you could do any of the following
###Code
print("Iterate over the items. `range(2)` is like a list [0,1].")
for i in range(2):
print(i)
print("Iterate over an actual list.")
for i in [0,1]:
print(i)
print("While works")
i = 0
while i < 2:
print(i)
i += 1
print("Python supports standard key words like continue and break")
while True:
print("Entered while")
break
###Output
_____no_output_____
###Markdown
Numpy and listsPython has lists built into the language.However, we will use a library called numpy for this.Numpy gives you lots of support functions that are useful when doing Machine Learning.Here, you will also see an import statement. This statement makes the entire numpy package available and we can access those symbols using the abbreviated 'np' syntax.
###Code
import numpy as np # Make numpy available using np.
# Create a numpy array, and append an element
a = np.array(["Hello", "World"])
a = np.append(a, "!")
print("Current array: {}".format(a))
print("Printing each element")
for i in a:
print(i)
print("\nPrinting each element and their index")
for i,e in enumerate(a):
print("Index: {}, was: {}".format(i, e))
print("\nShowing some basic math on arrays")
b = np.array([0,1,4,3,2])
print("Max: {}".format(np.max(b)))
print("Average: {}".format(np.average(b)))
print("Max index: {}".format(np.argmax(b)))
print("\nYou can print the type of anything")
print("Type of b: {}, type of b[0]: {}".format(type(b), type(b[0])))
print("\nUse numpy to create a [3,3] dimension array with random number")
c = np.random.rand(3, 3)
print(c)
print("\nYou can print the dimensions of arrays")
print("Shape of a: {}".format(a.shape))
print("Shape of b: {}".format(b.shape))
print("Shape of c: {}".format(c.shape))
print("...Observe, Python uses both [0,1,2] and (0,1,2) to specify lists")
###Output
_____no_output_____
###Markdown
Colab Specifics Colab is a virtual machine you can access directly. To run commands at the VM's terminal, prefix the line with an exclamation point (!).
###Code
print("\nDoing $ls on filesystem")
!ls -l
!pwd
print("Install numpy") # Just for test, numpy is actually preinstalled in all Colab instances
!pip install numpy
###Output
_____no_output_____
###Markdown
**Exercise**Create a code cell underneath this text cell and add code to:* List the path of the current directory (pwd)* Go to / (cd) and list the content (ls -l)
###Code
!pwd
!cd /
!ls -l
print("Hello")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Introduction to Colab and Python** Run in Google Colab View source on GitHub Welcome to this Colab where you will get a quick introduction to the Python programming language and the environment used for the course's exercises: Colab.Colab is a Python development environment that runs in the browser using Google Cloud. For example, to print "Hello World", just hover the mouse over [ ] and press the play button to the upper left. Or press shift-enter to execute.
###Code
# Never mind this statement, for compatibility reasons
from __future__ import absolute_import, division, print_function, unicode_literals
print("Hello World")
###Output
_____no_output_____
###Markdown
Functions, Conditionals, and IterationLet's create a Python function, and call it from a loop.
###Code
def HelloWorldXY(x, y):
if (x < 10):
print("Hello World, x was < 10")
elif (x < 20):
print("Hello World, x was >= 10 but < 20")
else:
print("Hello World, x was >= 20")
return x + y
for i in range(8, 25, 5): # i=8, 13, 18, 23 (start, stop, step)
print("--- Now running with i: {}".format(i))
r = HelloWorldXY(i,i)
print("Result from HelloWorld: {}".format(r))
print(HelloWorldXY(1,2))
###Output
_____no_output_____
###Markdown
Easy, right?If you want a loop starting at 0 to 2 (exclusive) you could do any of the following
###Code
print("Iterate over the items. `range(2)` is like a list [0,1].")
for i in range(2):
print(i)
print("Iterate over an actual list.")
for i in [0,1]:
print(i)
print("While works")
i = 0
while i < 2:
print(i)
i += 1
print("Python supports standard key words like continue and break")
while True:
print("Entered while")
break
###Output
_____no_output_____
###Markdown
Numpy and listsPython has lists built into the language.However, we will use a library called numpy for this.Numpy gives you lot's of support functions that are useful when doing Machine Learning.Here, you will also see an import statement. This statement makes the entire numpy package available and we can access those symbols using the abbreviated 'np' syntax.
###Code
import numpy as np # Make numpy available using np.
# Create a numpy array, and append an element
a = np.array(["Hello", "World"])
a = np.append(a, "!")
print("Current array: {}".format(a))
print("Printing each element")
for i in a:
print(i)
print("\nPrinting each element and their index")
for i,e in enumerate(a):
print("Index: {}, was: {}".format(i, e))
print("\nShowing some basic math on arrays")
b = np.array([0,1,4,3,2])
print("Max: {}".format(np.max(b)))
print("Average: {}".format(np.average(b)))
print("Max index: {}".format(np.argmax(b)))
print("\nYou can print the type of anything")
print("Type of b: {}, type of b[0]: {}".format(type(b), type(b[0])))
print("\nUse numpy to create a [3,3] dimension array with random number")
c = np.random.rand(3, 3)
print(c)
print("\nYou can print the dimensions of arrays")
print("Shape of a: {}".format(a.shape))
print("Shape of b: {}".format(b.shape))
print("Shape of c: {}".format(c.shape))
print("...Observe, Python uses both [0,1,2] and (0,1,2) to specify lists")
###Output
_____no_output_____
###Markdown
Colab Specifics Colab is a virtual machine you can access directly. To run commands at the VM's terminal, prefix the line with an exclamation point (!).
###Code
print("\nDoing $ls on filesystem")
!ls -l
!pwd
print("Install numpy") # Just for test, numpy is actually preinstalled in all Colab instancs
!pip install numpy
###Output
_____no_output_____
###Markdown
**Exercise**Create a code cell underneath this text cell and add code to:* List the path of the current directory (pwd)* Go to / (cd) and list the content (ls -l)
###Code
!pwd
!cd /
!ls -l
print("Hello")
###Output
_____no_output_____
###Markdown
anmol kumar **Introduction to Colab and Python** Run in Google Colab View source on GitHub Welcome to this Colab where you will get a quick introduction to the Python programming language and the environment used for the course's exercises: Colab.Colab is a Python development environment that runs in the browser using Google Cloud.For example, to print "Hello World", just hover the mouse over [ ] and press the play button to the upper left. Or press shift-enter to execute.
###Code
print("Hello World")
###Output
Hello World
###Markdown
Functions, Conditionals, and IterationLet's create a Python function, and call it from a loop.
###Code
def HelloWorldXY(x, y):
if (x < 10):
print("Hello World, x was < 10")
elif (x < 20):
print("Hello World, x was >= 10 but < 20")
else:
print("Hello World, x was >= 20")
return x + y
for i in range(8, 25, 5): # i=8, 13, 18, 23 (start, stop, step)
print("--- Now running with i: {}".format(i))
r = HelloWorldXY(i,i)
print("Result from HelloWorld: {}".format(r))
print(HelloWorldXY(1,2))
###Output
_____no_output_____
###Markdown
Easy, right?If you want a loop starting at 0 to 2 (exclusive) you could do any of the following
###Code
print("Iterate over the items. `range(2)` is like a list [0,1].")
for i in range(2):
print(i)
print("Iterate over an actual list.")
for i in [0,1]:
print(i)
print("While works")
i = 0
while i < 2:
print(i)
i += 1
print("Python supports standard key words like continue and break")
while True:
print("Entered while")
break
###Output
_____no_output_____
###Markdown
Numpy and listsPython has lists built into the language.However, we will use a library called numpy for this.Numpy gives you lots of support functions that are useful when doing Machine Learning.Here, you will also see an import statement. This statement makes the entire numpy package available and we can access those symbols using the abbreviated 'np' syntax.
###Code
import numpy as np # Make numpy available using np.
# Create a numpy array, and append an element
a = np.array(["Hello", "World"])
a = np.append(a, "!")
print("Current array: {}".format(a))
print("Printing each element")
for i in a:
print(i)
print("\nPrinting each element and their index")
for i,e in enumerate(a):
print("Index: {}, was: {}".format(i, e))
print("\nShowing some basic math on arrays")
b = np.array([0,1,4,3,2])
print("Max: {}".format(np.max(b)))
print("Average: {}".format(np.average(b)))
print("Max index: {}".format(np.argmax(b)))
print("\nYou can print the type of anything")
print("Type of b: {}, type of b[0]: {}".format(type(b), type(b[0])))
print("\nUse numpy to create a [3,3] dimension array with random number")
c = np.random.rand(3, 3)
print(c)
print("\nYou can print the dimensions of arrays")
print("Shape of a: {}".format(a.shape))
print("Shape of b: {}".format(b.shape))
print("Shape of c: {}".format(c.shape))
print("...Observe, Python uses both [0,1,2] and (0,1,2) to specify lists")
###Output
_____no_output_____
###Markdown
Colab Specifics Colab is a virtual machine you can access directly. To run commands at the VM's terminal, prefix the line with an exclamation point (!).
###Code
print("\nDoing $ls on filesystem")
!ls -l
!pwd
print("Install numpy") # Just for test, numpy is actually preinstalled in all Colab instances
!pip install numpy
###Output
_____no_output_____
###Markdown
**Exercise**Create a code cell underneath this text cell and add code to:* List the path of the current directory (pwd)* Go to / (cd) and list the content (ls -l)
###Code
!pwd
!cd /
!ls -l
print("Hello")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Introduction to Colab and Python** Run in Google Colab View source on GitHub Welcome to this Colab where you will get a quick introduction to the Python programming language and the environment used for the course's exercises: Colab.Colab is a Python development environment that runs in the browser using Google Cloud. For example, to print "Hello World", just hover the mouse over [ ] and press the play button to the upper left. Or press shift-enter to execute.
###Code
# Never mind this statement, for compatibility reasons
from __future__ import absolute_import, division, print_function, unicode_literals
print("Hello World")
###Output
_____no_output_____
###Markdown
Functions, Conditionals, and IterationLet's create a Python function, and call it from a loop.
###Code
def HelloWorldXY(x, y):
if (x < 10):
print("Hello World, x was < 10")
elif (x < 20):
print("Hello World, x was >= 10 but < 20")
else:
print("Hello World, x was >= 20")
return x + y
for i in range(8, 25, 5): # i=8, 13, 18, 23 (start, stop, step)
print("--- Now running with i: {}".format(i))
r = HelloWorldXY(i,i)
print("Result from HelloWorld: {}".format(r))
print(HelloWorldXY(1,2))
###Output
_____no_output_____
###Markdown
Easy, right?If you want a loop starting at 0 to 2 (exclusive) you could do any of the following
###Code
print("Iterate over the items. `range(2)` is like a list [0,1].")
for i in range(2):
print(i)
print("Iterate over an actual list.")
for i in [0,1]:
print(i)
print("While works")
i = 0
while i < 2:
print(i)
i += 1
print("Python supports standard key words like continue and break")
while True:
print("Entered while")
break
###Output
_____no_output_____
###Markdown
Numpy and listsPython has lists built into the language.However, we will use a library called numpy for this.Numpy gives you lot's of support functions that are useful when doing Machine Learning.Here, you will also see an import statement. This statement makes the entire numpy package available and we can access those symbols using the abbreviated 'np' syntax.
###Code
import numpy as np # Make numpy available using np.
# Create a numpy array, and append an element
a = np.array(["Hello", "World"])
a = np.append(a, "!")
print("Current array: {}".format(a))
print("Printing each element")
for i in a:
print(i)
print("\nPrinting each element and their index")
for i,e in enumerate(a):
print("Index: {}, was: {}".format(i, e))
print("\nShowing some basic math on arrays")
b = np.array([0,1,4,3,2])
print("Max: {}".format(np.max(b)))
print("Average: {}".format(np.average(b)))
print("Max index: {}".format(np.argmax(b)))
print("\nYou can print the type of anything")
print("Type of b: {}, type of b[0]: {}".format(type(b), type(b[0])))
print("\nUse numpy to create a [3,3] dimension array with random number")
c = np.random.rand(3, 3)
print(c)
print("\nYou can print the dimensions of arrays")
print("Shape of a: {}".format(a.shape))
print("Shape of b: {}".format(b.shape))
print("Shape of c: {}".format(c.shape))
print("...Observe, Python uses both [0,1,2] and (0,1,2) to specify lists")
###Output
_____no_output_____
###Markdown
Colab Specifics Colab is a virtual machine you can access directly. To run commands at the VM's terminal, prefix the line with an exclamation point (!).
###Code
print("\nDoing $ls on filesystem")
!ls -l
!pwd
print("Install numpy") # Just for test, numpy is actually preinstalled in all Colab instancs
!pip install numpy
###Output
_____no_output_____
###Markdown
**Exercise**Create a code cell underneath this text cell and add code to:* List the path of the current directory (pwd)* Go to / (cd) and list the content (ls -l)
###Code
!pwd
!cd /
!ls -l
print("Hello")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Introduction to Colab and Python** Run in Google Colab View source on GitHub Welcome to this Colab where you will get a quick introduction to the Python programming language and the environment used for the course's exercises: Colab.Colab is a Python development environment that runs in the browser using Google Cloud.For example, to print "Hello World", just hover the mouse over [ ] and press the play button to the upper left. Or press shift-enter to execute.
###Code
print("Hello World")
###Output
_____no_output_____
###Markdown
Functions, Conditionals, and IterationLet's create a Python function, and call it from a loop.
###Code
def HelloWorldXY(x, y):
if (x < 10):
print("Hello World, x was < 10")
elif (x < 20):
print("Hello World, x was >= 10 but < 20")
else:
print("Hello World, x was >= 20")
return x + y
for i in range(8, 25, 5): # i=8, 13, 18, 23 (start, stop, step)
print("--- Now running with i: {}".format(i))
r = HelloWorldXY(i,i)
print("Result from HelloWorld: {}".format(r))
print(HelloWorldXY(1,2))
###Output
_____no_output_____
###Markdown
Easy, right?If you want a loop starting at 0 to 2 (exclusive) you could do any of the following
###Code
print("Iterate over the items. `range(2)` is like a list [0,1].")
for i in range(2):
print(i)
print("Iterate over an actual list.")
for i in [0,1]:
print(i)
print("While works")
i = 0
while i < 2:
print(i)
i += 1
print("Python supports standard key words like continue and break")
while True:
print("Entered while")
break
###Output
_____no_output_____
###Markdown
Numpy and listsPython has lists built into the language.However, we will use a library called numpy for this.Numpy gives you lots of support functions that are useful when doing Machine Learning.Here, you will also see an import statement. This statement makes the entire numpy package available and we can access those symbols using the abbreviated 'np' syntax.
###Code
import numpy as np # Make numpy available using np.
# Create a numpy array, and append an element
a = np.array(["Hello", "World"])
a = np.append(a, "!")
print("Current array: {}".format(a))
print("Printing each element")
for i in a:
print(i)
print("\nPrinting each element and their index")
for i,e in enumerate(a):
print("Index: {}, was: {}".format(i, e))
print("\nShowing some basic math on arrays")
b = np.array([0,1,4,3,2])
print("Max: {}".format(np.max(b)))
print("Average: {}".format(np.average(b)))
print("Max index: {}".format(np.argmax(b)))
print("\nYou can print the type of anything")
print("Type of b: {}, type of b[0]: {}".format(type(b), type(b[0])))
print("\nUse numpy to create a [3,3] dimension array with random number")
c = np.random.rand(3, 3)
print(c)
print("\nYou can print the dimensions of arrays")
print("Shape of a: {}".format(a.shape))
print("Shape of b: {}".format(b.shape))
print("Shape of c: {}".format(c.shape))
print("...Observe, Python uses both [0,1,2] and (0,1,2) to specify lists")
###Output
_____no_output_____
###Markdown
Colab Specifics Colab is a virtual machine you can access directly. To run commands at the VM's terminal, prefix the line with an exclamation point (!).
###Code
print("\nDoing $ls on filesystem")
!ls -l
!pwd
print("Install numpy") # Just for test, numpy is actually preinstalled in all Colab instances
!pip install numpy
###Output
_____no_output_____
###Markdown
**Exercise**Create a code cell underneath this text cell and add code to:* List the path of the current directory (pwd)* Go to / (cd) and list the content (ls -l)
###Code
!pwd
!cd /
!ls -l
print("Hello")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Introduction to Colab and Python** Run in Google Colab View source on GitHub Welcome to this Colab where you will get a quick introduction to the Python programming language and the environment used for the course's exercises: Colab.Colab is a Python development environment that runs in the browser using Google Cloud. For example, to print "Hello World", just hover the mouse over [ ] and press the play button to the upper left. Or press shift-enter to execute.
###Code
# Never mind this statement, for compatibility reasons
from __future__ import absolute_import, division, print_function
print("Hello World")
###Output
_____no_output_____
###Markdown
Functions, Conditionals, and IterationLet's create a Python function, and call it from a loop.
###Code
def HelloWorldXY(x, y):
if (x < 10):
print("Hello World, x was < 10")
elif (x < 20):
print("Hello World, x was >= 10 but < 20")
else:
print("Hello World, x was >= 20")
return x + y
for i in range(8, 25, 5): # i=8, 13, 18, 23 (start, stop, step)
print("--- Now running with i: {}".format(i))
r = HelloWorldXY(i,i)
print("Result from HelloWorld: {}".format(r))
print(HelloWorldXY(1,2))
###Output
_____no_output_____
###Markdown
Easy, right?If you want a loop starting at 0 to 2 (exclusive) you could do any of the following
###Code
print("Iterate over the items. `range(2)` is like a list [0,1].")
for i in range(2):
print(i)
print("Iterate over an actual list.")
for i in [0,1]:
print(i)
print("While works")
i = 0
while i < 2:
print(i)
i += 1
print("Python supports standard key words like continue and break")
while True:
print("Entered while")
break
###Output
_____no_output_____
###Markdown
Numpy and listsPython has lists built into the language.However, we will use a library called numpy for this.Numpy gives you lot's of support functions that are useful when doing Machine Learning.Here, you will also see an import statement. This statement makes the entire numpy package available and we can access those symbols using the abbreviated 'np' syntax.
###Code
import numpy as np # Make numpy available using np.
# Create a numpy array, and append an element
a = np.array(["Hello", "World"])
a = np.append(a, "!")
print("Current array: {}".format(a))
print("Printing each element")
for i in a:
print(i)
print("\nPrinting each element and their index")
for i,e in enumerate(a):
print("Index: {}, was: {}".format(i, e))
print("\nShowing some basic math on arrays")
b = np.array([0,1,4,3,2])
print("Max: {}".format(np.max(b)))
print("Average: {}".format(np.average(b)))
print("Max index: {}".format(np.argmax(b)))
print("\nYou can print the type of anything")
print("Type of b: {}, type of b[0]: {}".format(type(b), type(b[0])))
print("\nUse numpy to create a [3,3] dimension array with random number")
c = np.random.rand(3, 3)
print(c)
print("\nYou can print the dimensions of arrays")
print("Shape of a: {}".format(a.shape))
print("Shape of b: {}".format(b.shape))
print("Shape of c: {}".format(c.shape))
print("...Observe, Python uses both [0,1,2] and (0,1,2) to specify lists")
###Output
_____no_output_____
###Markdown
Colab Specifics Colab is a virtual machine you can access directly. To run commands at the VM's terminal, prefix the line with an exclamation point (!).
###Code
print("\nDoing $ls on filesystem")
!ls -l
!pwd
print("Install numpy") # Just for test, numpy is actually preinstalled in all Colab instancs
!pip install numpy
###Output
_____no_output_____
###Markdown
**Exercise**Create a code cell underneath this text cell and add code to:* List the path of the current directory (pwd)* Go to / (cd) and list the content (ls -l)
###Code
!pwd
!cd /
!ls -l
print("Hello")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Introduction to Colab and Python** Run in Google Colab View source on GitHub Welcome to this Colab where you will get a quick introduction to the Python programming language and the environment used for the course's exercises: Colab.Colab is a Python development environment that runs in the browser using Google Cloud.For example, to print "Hello World", just hover the mouse over [ ] and press the play button to the upper left. Or press shift-enter to execute.
###Code
print("Hello World")
###Output
_____no_output_____
###Markdown
Functions, Conditionals, and IterationLet's create a Python function, and call it from a loop.
###Code
def HelloWorldXY(x, y):
if (x < 10):
print("Hello World, x was < 10")
elif (x < 20):
print("Hello World, x was >= 10 but < 20")
else:
print("Hello World, x was >= 20")
return x + y
for i in range(8, 25, 5): # i=8, 13, 18, 23 (start, stop, step)
print("--- Now running with i: {}".format(i))
r = HelloWorldXY(i,i)
print("Result from HelloWorld: {}".format(r))
print(HelloWorldXY(1,2))
###Output
_____no_output_____
###Markdown
Easy, right?If you want a loop starting at 0 to 2 (exclusive) you could do any of the following
###Code
print("Iterate over the items. `range(2)` is like a list [0,1].")
for i in range(2):
print(i)
print("Iterate over an actual list.")
for i in [0,1]:
print(i)
print("While works")
i = 0
while i < 2:
print(i)
i += 1
print("Python supports standard key words like continue and break")
while True:
print("Entered while")
break
###Output
_____no_output_____
###Markdown
Numpy and listsPython has lists built into the language.However, we will use a library called numpy for this.Numpy gives you lots of support functions that are useful when doing Machine Learning.Here, you will also see an import statement. This statement makes the entire numpy package available and we can access those symbols using the abbreviated 'np' syntax.
###Code
import numpy as np # Make numpy available using np.
# Create a numpy array, and append an element
a = np.array(["Hello", "World"])
a = np.append(a, "!")
print("Current array: {}".format(a))
print("Printing each element")
for i in a:
print(i)
print("\nPrinting each element and their index")
for i,e in enumerate(a):
print("Index: {}, was: {}".format(i, e))
print("\nShowing some basic math on arrays")
b = np.array([0,1,4,3,2])
print("Max: {}".format(np.max(b)))
print("Average: {}".format(np.average(b)))
print("Max index: {}".format(np.argmax(b)))
print("\nYou can print the type of anything")
print("Type of b: {}, type of b[0]: {}".format(type(b), type(b[0])))
print("\nUse numpy to create a [3,3] dimension array with random number")
c = np.random.rand(3, 3)
print(c)
print("\nYou can print the dimensions of arrays")
print("Shape of a: {}".format(a.shape))
print("Shape of b: {}".format(b.shape))
print("Shape of c: {}".format(c.shape))
print("...Observe, Python uses both [0,1,2] and (0,1,2) to specify lists")
###Output
_____no_output_____
###Markdown
Colab Specifics Colab is a virtual machine you can access directly. To run commands at the VM's terminal, prefix the line with an exclamation point (!).
###Code
print("\nDoing $ls on filesystem")
!ls -l
!pwd
print("Install numpy") # Just for test, numpy is actually preinstalled in all Colab instances
!pip install numpy
###Output
_____no_output_____
###Markdown
**Exercise**Create a code cell underneath this text cell and add code to:* List the path of the current directory (pwd)* Go to / (cd) and list the content (ls -l)
###Code
!pwd
!cd /
!ls -l
print("Hello")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Introduction to Colab and Python** Welcome to this Colab where you will get a quick introduction to the Python programming language and the environment used for the course's exercises: Colab.Colab is a Python development environment that runs in the browser using Google Cloud.For example, to print "Hello World", just hover the mouse over [ ] and press the play button to the upper left. Or press shift-enter to execute.
###Code
print("Hello World")
###Output
_____no_output_____
###Markdown
Functions, Conditionals, and IterationLet's create a Python function, and call it from a loop.
###Code
def HelloWorldXY(x, y):
if (x < 10):
print("Hello World, x was < 10")
elif (x < 20):
print("Hello World, x was >= 10 but < 20")
else:
print("Hello World, x was >= 20")
return x + y
for i in range(8, 25, 5): # i=8, 13, 18, 23 (start, stop, step)
print("--- Now running with i: {}".format(i))
r = HelloWorldXY(i,i)
print("Result from HelloWorld: {}".format(r))
print(HelloWorldXY(1,2))
###Output
Hello World, x was < 10
3
###Markdown
Easy, right?If you want a loop starting at 0 to 2 (exclusive) you could do any of the following
###Code
print("Iterate over the items. `range(2)` is like a list [0,1].")
for i in range(2):
print(i)
print("Iterate over an actual list.")
for i in [0,1]:
print(i)
print("While works")
i = 0
while i < 2:
print(i)
i += 1
print("Python supports standard key words like continue and break")
while True:
print("Entered while")
break
###Output
_____no_output_____
###Markdown
Numpy and listsPython has lists built into the language.However, we will use a library called numpy for this.Numpy gives you lots of support functions that are useful when doing Machine Learning.Here, you will also see an import statement. This statement makes the entire numpy package available and we can access those symbols using the abbreviated 'np' syntax.
###Code
import numpy as np # Make numpy available using np.
# Create a numpy array, and append an element
a = np.array(["Hello", "World"])
a = np.append(a, "!")
print("Current array: {}".format(a))
print("Printing each element")
for i in a:
print(i)
print("\nPrinting each element and their index")
for i,e in enumerate(a):
print("Index: {}, was: {}".format(i, e))
print("\nShowing some basic math on arrays")
b = np.array([0,1,4,3,2])
print("Max: {}".format(np.max(b)))
print("Average: {}".format(np.average(b)))
print("Max index: {}".format(np.argmax(b)))
print("\nYou can print the type of anything")
print("Type of b: {}, type of b[0]: {}".format(type(b), type(b[0])))
print("\nUse numpy to create a [3,3] dimension array with random number")
c = np.random.rand(3, 3)
print(c)
print("\nYou can print the dimensions of arrays")
print("Shape of a: {}".format(a.shape))
print("Shape of b: {}".format(b.shape))
print("Shape of c: {}".format(c.shape))
print("...Observe, Python uses both [0,1,2] and (0,1,2) to specify lists")
###Output
You can print the dimensions of arrays
Shape of a: (3,)
Shape of b: (5,)
Shape of c: (3, 3)
...Observe, Python uses both [0,1,2] and (0,1,2) to specify lists
###Markdown
Colab Specifics Colab is a virtual machine you can access directly. To run commands at the VM's terminal, prefix the line with an exclamation point (!).
###Code
print("\nDoing $ls on filesystem")
!ls -l
!pwd
print("Install numpy") # Just for test, numpy is actually preinstalled in all Colab instances
!pip install numpy
###Output
Install numpy
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (1.18.5)
###Markdown
**Exercise** Create a code cell underneath this text cell and add code to: * List the path of the current directory (pwd) * Go to / (cd) and list the content (ls -l)
###Code
!pwd
!cd /
!ls
print("Hello")
###Output
/content
sample_data
Hello
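###Markdown
Note that the listing above still shows the contents of `/content`: each `!` line runs in its own subshell, so `!cd /` does not change the directory for the following commands. A minimal sketch of two ways around this, using either command chaining or the IPython `%cd` magic:
###Code
# Option 1: chain the commands so they run in the same shell
!cd / && ls -l
# Option 2: `%cd` is an IPython magic that changes the notebook's working directory persistently
%cd /
!ls -l
%cd /content
###Output
_____no_output_____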
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Introduction to Colab and Python** Run in Google Colab View source on GitHub Welcome to this Colab where you will get a quick introduction to the Python programming language and the environment used for the course's exercises: Colab. Colab is a Python development environment that runs in the browser using Google Cloud. For example, to print "Hello World", just hover the mouse over [ ] and press the play button to the upper left. Or press shift-enter to execute.
###Code
# Never mind this statement, for compatibility reasons
from __future__ import absolute_import, division, print_function
print("Hello World")
###Output
_____no_output_____
###Markdown
Functions, Conditionals, and Iteration. Let's create a Python function, and call it from a loop.
###Code
def HelloWorldXY(x, y):
if (x < 10):
print("Hello World, x was < 10")
elif (x < 20):
print("Hello World, x was >= 10 but < 20")
else:
print("Hello World, x was >= 20")
return x + y
for i in range(8, 25, 5): # i=8, 13, 18, 23 (start, stop, step)
print("--- Now running with i: {}".format(i))
r = HelloWorldXY(i,i)
print("Result from HelloWorld: {}".format(r))
print(HelloWorldXY(1,2))
###Output
_____no_output_____
###Markdown
Easy, right? If you want a loop that runs from 0 to 2 (exclusive), you could do any of the following:
###Code
print("Iterate over the items. `range(2)` is like a list [0,1].")
for i in range(2):
print(i)
print("Iterate over an actual list.")
for i in [0,1]:
print(i)
print("While works")
i = 0
while i < 2:
print(i)
i += 1
print("Python supports standard key words like continue and break")
while True:
print("Entered while")
break
###Output
_____no_output_____
###Markdown
Numpy and lists. Python has lists built into the language. However, we will use a library called numpy for this. Numpy gives you lots of support functions that are useful when doing Machine Learning. Here, you will also see an import statement. This statement makes the entire numpy package available and we can access those symbols using the abbreviated 'np' syntax.
###Code
import numpy as np # Make numpy available using np.
# Create a numpy array, and append an element
a = np.array(["Hello", "World"])
a = np.append(a, "!")
print("Current array: {}".format(a))
print("Printing each element")
for i in a:
print(i)
print("\nPrinting each element and their index")
for i,e in enumerate(a):
print("Index: {}, was: {}".format(i, e))
print("\nShowing some basic math on arrays")
b = np.array([0,1,4,3,2])
print("Max: {}".format(np.max(b)))
print("Average: {}".format(np.average(b)))
print("Max index: {}".format(np.argmax(b)))
print("\nYou can print the type of anything")
print("Type of b: {}, type of b[0]: {}".format(type(b), type(b[0])))
print("\nUse numpy to create a [3,3] dimension array with random number")
c = np.random.rand(3, 3)
print(c)
print("\nYou can print the dimensions of arrays")
print("Shape of a: {}".format(a.shape))
print("Shape of b: {}".format(b.shape))
print("Shape of c: {}".format(c.shape))
print("...Observe, Python uses both [0,1,2] and (0,1,2) to specify lists")
###Output
_____no_output_____
###Markdown
Colab Specifics Colab is a virtual machine you can access directly. To run commands at the VM's terminal, prefix the line with an exclamation point (!).
###Code
print("\nDoing $ls on filesystem")
!ls -l
!pwd
print("Install numpy") # Just for test, numpy is actually preinstalled in all Colab instancs
!pip install numpy
###Output
_____no_output_____
###Markdown
**Exercise** Create a code cell underneath this text cell and add code to: * List the path of the current directory (pwd) * Go to / (cd) and list the content (ls -l)
###Code
!pwd
!cd /
!ls -l
print("Hello")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Introduction to Colab and Python** Run in Google Colab View source on GitHub Welcome to this Colab where you will get a quick introduction to the Python programming language and the environment used for the course's exercises: Colab. Colab is a Python development environment that runs in the browser using Google Cloud. For example, to print "Hello World", just hover the mouse over [ ] and press the play button to the upper left. Or press shift-enter to execute.
###Code
# Never mind this statement, for compatibility reasons
from __future__ import absolute_import, division, print_function
print("Hello World")
###Output
_____no_output_____
###Markdown
Functions, Conditionals, and Iteration. Let's create a Python function, and call it from a loop.
###Code
def HelloWorldXY(x, y):
if (x < 10):
print("Hello World, x was < 10")
elif (x < 20):
print("Hello World, x was >= 10 but < 20")
else:
print("Hello World, x was >= 20")
return x + y
for i in range(8, 25, 5): # i=8, 13, 18, 23 (start, stop, step)
print("--- Now running with i: {}".format(i))
r = HelloWorldXY(i,i)
print("Result from HelloWorld: {}".format(r))
print(HelloWorldXY(1,2))
###Output
_____no_output_____
###Markdown
Easy, right? If you want a loop that runs from 0 to 2 (exclusive), you could do any of the following:
###Code
print("Iterate over the items. `range(2)` is like a list [0,1].")
for i in range(2):
print(i)
print("Iterate over an actual list.")
for i in [0,1]:
print(i)
print("While works")
i = 0
while i < 2:
print(i)
i += 1
print("Python supports standard key words like continue and break")
while True:
print("Entered while")
break
###Output
_____no_output_____
###Markdown
Numpy and lists. Python has lists built into the language. However, we will use a library called numpy for this. Numpy gives you lots of support functions that are useful when doing Machine Learning. Here, you will also see an import statement. This statement makes the entire numpy package available and we can access those symbols using the abbreviated 'np' syntax.
###Code
import numpy as np # Make numpy available using np.
# Create a numpy array, and append an element
a = np.array(["Hello", "World"])
a = np.append(a, "!")
print("Current array: {}".format(a))
print("Printing each element")
for i in a:
print(i)
print("\nPrinting each element and their index")
for i,e in enumerate(a):
print("Index: {}, was: {}".format(i, e))
print("\nShowing some basic math on arrays")
b = np.array([0,1,4,3,2])
print("Max: {}".format(np.max(b)))
print("Average: {}".format(np.average(b)))
print("Max index: {}".format(np.argmax(b)))
print("\nYou can print the type of anything")
print("Type of b: {}, type of b[0]: {}".format(type(b), type(b[0])))
print("\nUse numpy to create a [3,3] dimension array with random number")
c = np.random.rand(3, 3)
print(c)
print("\nYou can print the dimensions of arrays")
print("Shape of a: {}".format(a.shape))
print("Shape of b: {}".format(b.shape))
print("Shape of c: {}".format(c.shape))
print("...Observe, Python uses both [0,1,2] and (0,1,2) to specify lists")
###Output
_____no_output_____
###Markdown
Colab Specifics Colab is a virtual machine you can access directly. To run commands at the VM's terminal, prefix the line with an exclamation point (!).
###Code
print("\nDoing $ls on filesystem")
!ls -l
!pwd
print("Install numpy") # Just for test, numpy is actually preinstalled in all Colab instancs
!pip install numpy
###Output
_____no_output_____
###Markdown
**Exercise** Create a code cell underneath this text cell and add code to: * List the path of the current directory (pwd) * Go to / (cd) and list the content (ls -l)
###Code
!pwd
!cd /
!ls -l
print("Hello")
###Output
_____no_output_____ |
site/en-snapshot/guide/keras/custom_layers_and_models.ipynb | ###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Making new Layers and Models via subclassing View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Setup
###Code
import tensorflow as tf
from tensorflow import keras
###Output
_____no_output_____
###Markdown
The `Layer` class: the combination of state (weights) and some computation. One of the central abstractions in Keras is the `Layer` class. A layer encapsulates both a state (the layer's "weights") and a transformation from inputs to outputs (a "call", the layer's forward pass). Here's a densely-connected layer. It has a state: the variables `w` and `b`.
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
w_init = tf.random_normal_initializer()
self.w = tf.Variable(
initial_value=w_init(shape=(input_dim, units), dtype="float32"),
trainable=True,
)
b_init = tf.zeros_initializer()
self.b = tf.Variable(
initial_value=b_init(shape=(units,), dtype="float32"), trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
###Output
_____no_output_____
###Markdown
You would use a layer by calling it on some tensor input(s), much like a Python function.
###Code
x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
###Output
_____no_output_____
###Markdown
Note that the weights `w` and `b` are automatically tracked by the layer upon being set as layer attributes:
###Code
assert linear_layer.weights == [linear_layer.w, linear_layer.b]
###Output
_____no_output_____
###Markdown
Note you also have access to a quicker shortcut for adding weight to a layer: the `add_weight()` method:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
self.w = self.add_weight(
shape=(input_dim, units), initializer="random_normal", trainable=True
)
self.b = self.add_weight(shape=(units,), initializer="zeros", trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
###Output
_____no_output_____
###Markdown
Layers can have non-trainable weights. Besides trainable weights, you can add non-trainable weights to a layer as well. Such weights are meant not to be taken into account during backpropagation, when you are training the layer. Here's how to add and use a non-trainable weight:
###Code
class ComputeSum(keras.layers.Layer):
def __init__(self, input_dim):
super(ComputeSum, self).__init__()
self.total = tf.Variable(initial_value=tf.zeros((input_dim,)), trainable=False)
def call(self, inputs):
self.total.assign_add(tf.reduce_sum(inputs, axis=0))
return self.total
x = tf.ones((2, 2))
my_sum = ComputeSum(2)
y = my_sum(x)
print(y.numpy())
y = my_sum(x)
print(y.numpy())
###Output
_____no_output_____
###Markdown
It's part of `layer.weights`, but it gets categorized as a non-trainable weight:
###Code
print("weights:", len(my_sum.weights))
print("non-trainable weights:", len(my_sum.non_trainable_weights))
# It's not included in the trainable weights:
print("trainable_weights:", my_sum.trainable_weights)
###Output
_____no_output_____
###Markdown
Best practice: deferring weight creation until the shape of the inputs is known. Our `Linear` layer above took an `input_dim` argument that was used to compute the shape of the weights `w` and `b` in `__init__()`:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
self.w = self.add_weight(
shape=(input_dim, units), initializer="random_normal", trainable=True
)
self.b = self.add_weight(shape=(units,), initializer="zeros", trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
###Output
_____no_output_____
###Markdown
In many cases, you may not know in advance the size of your inputs, and you would like to lazily create weights when that value becomes known, some time after instantiating the layer. In the Keras API, we recommend creating layer weights in the `build(self, input_shape)` method of your layer. Like this:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32):
super(Linear, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
###Output
_____no_output_____
###Markdown
The `__call__()` method of your layer will automatically run `build()` the first time it is called. You now have a layer that's lazy and thus easier to use:
###Code
# At instantiation, we don't know on what inputs this is going to get called
linear_layer = Linear(32)
# The layer's weights are created dynamically the first time the layer is called
y = linear_layer(x)
###Output
_____no_output_____
###Markdown
Layers are recursively composable. If you assign a Layer instance as an attribute of another Layer, the outer layer will start tracking the weights of the inner layer. We recommend creating such sublayers in the `__init__()` method (since the sublayers will typically have a `build()` method, they will be built when the outer layer gets built).
###Code
# Let's assume we are reusing the Linear class
# with a `build` method that we defined above.
class MLPBlock(keras.layers.Layer):
def __init__(self):
super(MLPBlock, self).__init__()
self.linear_1 = Linear(32)
self.linear_2 = Linear(32)
self.linear_3 = Linear(1)
def call(self, inputs):
x = self.linear_1(inputs)
x = tf.nn.relu(x)
x = self.linear_2(x)
x = tf.nn.relu(x)
return self.linear_3(x)
mlp = MLPBlock()
y = mlp(tf.ones(shape=(3, 64))) # The first call to the `mlp` will create the weights
print("weights:", len(mlp.weights))
print("trainable weights:", len(mlp.trainable_weights))
###Output
_____no_output_____
###Markdown
The `add_loss()` method. When writing the `call()` method of a layer, you can create loss tensors that you will want to use later, when writing your training loop. This is doable by calling `self.add_loss(value)`:
###Code
# A layer that creates an activity regularization loss
class ActivityRegularizationLayer(keras.layers.Layer):
def __init__(self, rate=1e-2):
super(ActivityRegularizationLayer, self).__init__()
self.rate = rate
def call(self, inputs):
self.add_loss(self.rate * tf.reduce_sum(inputs))
return inputs
###Output
_____no_output_____
###Markdown
These losses (including those created by any inner layer) can be retrieved via `layer.losses`. This property is reset at the start of every `__call__()` to the top-level layer, so that `layer.losses` always contains the loss values created during the last forward pass.
###Code
class OuterLayer(keras.layers.Layer):
def __init__(self):
super(OuterLayer, self).__init__()
self.activity_reg = ActivityRegularizationLayer(1e-2)
def call(self, inputs):
return self.activity_reg(inputs)
layer = OuterLayer()
assert len(layer.losses) == 0 # No losses yet since the layer has never been called
_ = layer(tf.zeros(1, 1))
assert len(layer.losses) == 1 # We created one loss value
# `layer.losses` gets reset at the start of each __call__
_ = layer(tf.zeros(1, 1))
assert len(layer.losses) == 1 # This is the loss created during the call above
###Output
_____no_output_____
###Markdown
In addition, the `losses` property also contains regularization losses created for the weights of any inner layer:
###Code
class OuterLayerWithKernelRegularizer(keras.layers.Layer):
def __init__(self):
super(OuterLayerWithKernelRegularizer, self).__init__()
self.dense = keras.layers.Dense(
32, kernel_regularizer=tf.keras.regularizers.l2(1e-3)
)
def call(self, inputs):
return self.dense(inputs)
layer = OuterLayerWithKernelRegularizer()
_ = layer(tf.zeros((1, 1)))
# This is `1e-3 * sum(layer.dense.kernel ** 2)`,
# created by the `kernel_regularizer` above.
print(layer.losses)
###Output
_____no_output_____
###Markdown
These losses are meant to be taken into account when writing training loops, like this:

```python
# Instantiate an optimizer.
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# Iterate over the batches of a dataset.
for x_batch_train, y_batch_train in train_dataset:
    with tf.GradientTape() as tape:
        logits = layer(x_batch_train)  # Logits for this minibatch
        # Loss value for this minibatch
        loss_value = loss_fn(y_batch_train, logits)
        # Add extra losses created during this forward pass:
        loss_value += sum(model.losses)

    grads = tape.gradient(loss_value, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
```

For a detailed guide about writing training loops, see the [guide to writing a training loop from scratch](https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch/).

These losses also work seamlessly with `fit()` (they get automatically summed and added to the main loss, if any):
###Code
import numpy as np
inputs = keras.Input(shape=(3,))
outputs = ActivityRegularizationLayer()(inputs)
model = keras.Model(inputs, outputs)
# If there is a loss passed in `compile`, the regularization
# losses get added to it
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))
# It's also possible not to pass any loss in `compile`,
# since the model already has a loss to minimize, via the `add_loss`
# call during the forward pass!
model.compile(optimizer="adam")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))
###Output
_____no_output_____
###Markdown
The `add_metric()` method. Similarly to `add_loss()`, layers also have an `add_metric()` method for tracking the moving average of a quantity during training. Consider the following layer: a "logistic endpoint" layer. It takes as inputs predictions & targets, it computes a loss which it tracks via `add_loss()`, and it computes an accuracy scalar, which it tracks via `add_metric()`.
###Code
class LogisticEndpoint(keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
self.accuracy_fn = keras.metrics.BinaryAccuracy()
def call(self, targets, logits, sample_weights=None):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
loss = self.loss_fn(targets, logits, sample_weights)
self.add_loss(loss)
# Log accuracy as a metric and add it
# to the layer using `self.add_metric()`.
acc = self.accuracy_fn(targets, logits, sample_weights)
self.add_metric(acc, name="accuracy")
# Return the inference-time prediction tensor (for `.predict()`).
return tf.nn.softmax(logits)
###Output
_____no_output_____
###Markdown
Metrics tracked in this way are accessible via `layer.metrics`:
###Code
layer = LogisticEndpoint()
targets = tf.ones((2, 2))
logits = tf.ones((2, 2))
y = layer(targets, logits)
print("layer.metrics:", layer.metrics)
print("current accuracy value:", float(layer.metrics[0].result()))
###Output
_____no_output_____
###Markdown
Just like for `add_loss()`, these metrics are tracked by `fit()`:
###Code
inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(10,), name="targets")
logits = keras.layers.Dense(10)(inputs)
predictions = LogisticEndpoint(name="predictions")(logits, targets)
model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam")
data = {
"inputs": np.random.random((3, 3)),
"targets": np.random.random((3, 10)),
}
model.fit(data)
###Output
_____no_output_____
###Markdown
You can optionally enable serialization on your layers. If you need your custom layers to be serializable as part of a [Functional model](https://www.tensorflow.org/guide/keras/functional/), you can optionally implement a `get_config()` method:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32):
super(Linear, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {"units": self.units}
# Now you can recreate the layer from its config:
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
###Output
_____no_output_____
###Markdown
Note that the `__init__()` method of the base `Layer` class takes some keyword arguments, in particular a `name` and a `dtype`. It's good practice to pass these arguments to the parent class in `__init__()` and to include them in the layer config:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32, **kwargs):
super(Linear, self).__init__(**kwargs)
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
config = super(Linear, self).get_config()
config.update({"units": self.units})
return config
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
###Output
_____no_output_____
###Markdown
If you need more flexibility when deserializing the layer from its config, you can also override the `from_config()` class method. This is the base implementation of `from_config()`:

```python
def from_config(cls, config):
    return cls(**config)
```

To learn more about serialization and saving, see the complete [guide to saving and serializing models](https://www.tensorflow.org/guide/keras/save_and_serialize/).

Privileged `training` argument in the `call()` method

Some layers, in particular the `BatchNormalization` layer and the `Dropout` layer, have different behaviors during training and inference. For such layers, it is standard practice to expose a `training` (boolean) argument in the `call()` method.

By exposing this argument in `call()`, you enable the built-in training and evaluation loops (e.g. `fit()`) to correctly use the layer in training and inference.
###Code
class CustomDropout(keras.layers.Layer):
def __init__(self, rate, **kwargs):
super(CustomDropout, self).__init__(**kwargs)
self.rate = rate
def call(self, inputs, training=None):
if training:
return tf.nn.dropout(inputs, rate=self.rate)
return inputs
###Output
_____no_output_____
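###Markdown
A minimal sketch of how the `training` flag changes behavior, using the `CustomDropout` layer defined above (`fit()` and `evaluate()` set this flag for you automatically):
###Code
drop = CustomDropout(rate=0.5)
x = tf.ones((2, 4))
# Inference behavior: inputs pass through unchanged
print(drop(x, training=False))
# Training behavior: roughly half the entries are zeroed and the
# survivors are scaled by 1 / (1 - rate)
print(drop(x, training=True))
###Output
_____no_output_____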
###Markdown
Privileged `mask` argument in the `call()` methodThe other privileged argument supported by `call()` is the `mask` argument.You will find it in all Keras RNN layers. A mask is a boolean tensor (oneboolean value per timestep in the input) used to skip certain input timestepswhen processing timeseries data.Keras will automatically pass the correct `mask` argument to `__call__()` forlayers that support it, when a mask is generated by a prior layer.Mask-generating layers are the `Embedding`layer configured with `mask_zero=True`, and the `Masking` layer.To learn more about masking and how to write masking-enabled layers, pleasecheck out the guide["understanding padding and masking"](https://www.tensorflow.org/guide/keras/masking_and_padding/). The `Model` classIn general, you will use the `Layer` class to define inner computation blocks,and will use the `Model` class to define the outer model -- the object youwill train.For instance, in a ResNet50 model, you would have several ResNet blockssubclassing `Layer`, and a single `Model` encompassing the entire ResNet50network.The `Model` class has the same API as `Layer`, with the following differences:- It exposes built-in training, evaluation, and prediction loops(`model.fit()`, `model.evaluate()`, `model.predict()`).- It exposes the list of its inner layers, via the `model.layers` property.- It exposes saving and serialization APIs (`save()`, `save_weights()`...)Effectively, the `Layer` class corresponds to what we refer to in theliterature as a "layer" (as in "convolution layer" or "recurrent layer") or asa "block" (as in "ResNet block" or "Inception block").Meanwhile, the `Model` class corresponds to what is referred to in theliterature as a "model" (as in "deep learning model") or as a "network" (as in"deep neural network").So if you're wondering, "should I use the `Layer` class or the `Model` class?",ask yourself: will I need to call `fit()` on it? Will I need to call `save()`on it? If so, go with `Model`. If not (either because your class is just a blockin a bigger system, or because you are writing training & saving code yourself),use `Layer`.For instance, we could take our mini-resnet example above, and use it to builda `Model` that we could train with `fit()`, and that we could save with`save_weights()`: ```pythonclass ResNet(tf.keras.Model): def __init__(self): super(ResNet, self).__init__() self.block_1 = ResNetBlock() self.block_2 = ResNetBlock() self.global_pool = layers.GlobalAveragePooling2D() self.classifier = Dense(num_classes) def call(self, inputs): x = self.block_1(inputs) x = self.block_2(x) x = self.global_pool(x) return self.classifier(x)resnet = ResNet()dataset = ...resnet.fit(dataset, epochs=10)resnet.save(filepath)``` Putting it all together: an end-to-end exampleHere's what you've learned so far:- A `Layer` encapsulate a state (created in `__init__()` or `build()`) and somecomputation (defined in `call()`).- Layers can be recursively nested to create new, bigger computation blocks.- Layers can create and track losses (typically regularization losses) as wellas metrics, via `add_loss()` and `add_metric()`- The outer container, the thing you want to train, is a `Model`. A `Model` isjust like a `Layer`, but with added training and serialization utilities.Let's put all of these things together into an end-to-end example: we're goingto implement a Variational AutoEncoder (VAE). We'll train it on MNIST digits.Our VAE will be a subclass of `Model`, built as a nested composition of layersthat subclass `Layer`. 
It will feature a regularization loss (KL divergence).
###Code
from tensorflow.keras import layers
class Sampling(layers.Layer):
"""Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""
def call(self, inputs):
z_mean, z_log_var = inputs
batch = tf.shape(z_mean)[0]
dim = tf.shape(z_mean)[1]
epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
return z_mean + tf.exp(0.5 * z_log_var) * epsilon
class Encoder(layers.Layer):
"""Maps MNIST digits to a triplet (z_mean, z_log_var, z)."""
def __init__(self, latent_dim=32, intermediate_dim=64, name="encoder", **kwargs):
super(Encoder, self).__init__(name=name, **kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation="relu")
self.dense_mean = layers.Dense(latent_dim)
self.dense_log_var = layers.Dense(latent_dim)
self.sampling = Sampling()
def call(self, inputs):
x = self.dense_proj(inputs)
z_mean = self.dense_mean(x)
z_log_var = self.dense_log_var(x)
z = self.sampling((z_mean, z_log_var))
return z_mean, z_log_var, z
class Decoder(layers.Layer):
"""Converts z, the encoded digit vector, back into a readable digit."""
def __init__(self, original_dim, intermediate_dim=64, name="decoder", **kwargs):
super(Decoder, self).__init__(name=name, **kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation="relu")
self.dense_output = layers.Dense(original_dim, activation="sigmoid")
def call(self, inputs):
x = self.dense_proj(inputs)
return self.dense_output(x)
class VariationalAutoEncoder(keras.Model):
"""Combines the encoder and decoder into an end-to-end model for training."""
def __init__(
self,
original_dim,
intermediate_dim=64,
latent_dim=32,
name="autoencoder",
**kwargs
):
super(VariationalAutoEncoder, self).__init__(name=name, **kwargs)
self.original_dim = original_dim
self.encoder = Encoder(latent_dim=latent_dim, intermediate_dim=intermediate_dim)
self.decoder = Decoder(original_dim, intermediate_dim=intermediate_dim)
def call(self, inputs):
z_mean, z_log_var, z = self.encoder(inputs)
reconstructed = self.decoder(z)
# Add KL divergence regularization loss.
kl_loss = -0.5 * tf.reduce_mean(
z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1
)
self.add_loss(kl_loss)
return reconstructed
###Output
_____no_output_____
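###Markdown
For reference, the KL term added in `call()` above is the closed-form divergence between the encoder's diagonal Gaussian $q(z \mid x) = \mathcal{N}(z_{\text{mean}}, \exp(z_{\text{log\_var}}))$ and a standard normal prior: $D_{\mathrm{KL}} = -\tfrac{1}{2} \sum_j \left(1 + z_{\text{log\_var},j} - z_{\text{mean},j}^2 - e^{z_{\text{log\_var},j}}\right)$. The code uses `tf.reduce_mean` instead of the sum, which only rescales the strength of the regularization.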
###Markdown
Let's write a simple training loop on MNIST:
###Code
original_dim = 784
vae = VariationalAutoEncoder(original_dim, 64, 32)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
mse_loss_fn = tf.keras.losses.MeanSquaredError()
loss_metric = tf.keras.metrics.Mean()
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype("float32") / 255
train_dataset = tf.data.Dataset.from_tensor_slices(x_train)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
epochs = 2
# Iterate over epochs.
for epoch in range(epochs):
print("Start of epoch %d" % (epoch,))
# Iterate over the batches of the dataset.
for step, x_batch_train in enumerate(train_dataset):
with tf.GradientTape() as tape:
reconstructed = vae(x_batch_train)
# Compute reconstruction loss
loss = mse_loss_fn(x_batch_train, reconstructed)
loss += sum(vae.losses) # Add KLD regularization loss
grads = tape.gradient(loss, vae.trainable_weights)
optimizer.apply_gradients(zip(grads, vae.trainable_weights))
loss_metric(loss)
if step % 100 == 0:
print("step %d: mean loss = %.4f" % (step, loss_metric.result()))
###Output
_____no_output_____
###Markdown
Note that since the VAE is subclassing `Model`, it features built-in training loops. So you could also have trained it like this:
###Code
vae = VariationalAutoEncoder(784, 64, 32)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=2, batch_size=64)
###Output
_____no_output_____
###Markdown
Beyond object-oriented development: the Functional API. Was this example too much object-oriented development for you? You can also build models using the [Functional API](https://www.tensorflow.org/guide/keras/functional/). Importantly, choosing one style or another does not prevent you from leveraging components written in the other style: you can always mix-and-match. For instance, the Functional API example below reuses the same `Sampling` layer we defined in the example above:
###Code
original_dim = 784
intermediate_dim = 64
latent_dim = 32
# Define encoder model.
original_inputs = tf.keras.Input(shape=(original_dim,), name="encoder_input")
x = layers.Dense(intermediate_dim, activation="relu")(original_inputs)
z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)
z = Sampling()((z_mean, z_log_var))
encoder = tf.keras.Model(inputs=original_inputs, outputs=z, name="encoder")
# Define decoder model.
latent_inputs = tf.keras.Input(shape=(latent_dim,), name="z_sampling")
x = layers.Dense(intermediate_dim, activation="relu")(latent_inputs)
outputs = layers.Dense(original_dim, activation="sigmoid")(x)
decoder = tf.keras.Model(inputs=latent_inputs, outputs=outputs, name="decoder")
# Define VAE model.
outputs = decoder(z)
vae = tf.keras.Model(inputs=original_inputs, outputs=outputs, name="vae")
# Add KL divergence regularization loss.
kl_loss = -0.5 * tf.reduce_mean(z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)
vae.add_loss(kl_loss)
# Train.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=3, batch_size=64)
###Output
_____no_output_____
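###Markdown
Both the subclassed and the Functional versions are `Model`s, so they also get the saving APIs mentioned earlier (`save()`, `save_weights()`, ...). A minimal sketch of a weight save/restore round-trip for the subclassed version (the checkpoint path is just an example):
###Code
vae_a = VariationalAutoEncoder(784, 64, 32)
_ = vae_a(tf.zeros((1, 784)))          # call once so the variables get created
vae_a.save_weights("/tmp/vae_ckpt")    # saved in the TensorFlow checkpoint format
vae_b = VariationalAutoEncoder(784, 64, 32)
_ = vae_b(tf.zeros((1, 784)))          # build the second model the same way
vae_b.load_weights("/tmp/vae_ckpt")    # weights are matched by object structure
###Output
_____no_output_____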
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Making new Layers and Models via subclassing View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Setup
###Code
import tensorflow as tf
from tensorflow import keras
###Output
_____no_output_____
###Markdown
The `Layer` class: the combination of state (weights) and some computation. One of the central abstractions in Keras is the `Layer` class. A layer encapsulates both a state (the layer's "weights") and a transformation from inputs to outputs (a "call", the layer's forward pass). Here's a densely-connected layer. It has a state: the variables `w` and `b`.
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
w_init = tf.random_normal_initializer()
self.w = tf.Variable(
initial_value=w_init(shape=(input_dim, units), dtype="float32"),
trainable=True,
)
b_init = tf.zeros_initializer()
self.b = tf.Variable(
initial_value=b_init(shape=(units,), dtype="float32"), trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
###Output
_____no_output_____
###Markdown
You would use a layer by calling it on some tensor input(s), much like a Pythonfunction.
###Code
x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
###Output
_____no_output_____
###Markdown
Note that the weights `w` and `b` are automatically tracked by the layer uponbeing set as layer attributes:
###Code
assert linear_layer.weights == [linear_layer.w, linear_layer.b]
###Output
_____no_output_____
###Markdown
Note you also have access to a quicker shortcut for adding weight to a layer:the `add_weight()` method:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
self.w = self.add_weight(
shape=(input_dim, units), initializer="random_normal", trainable=True
)
self.b = self.add_weight(shape=(units,), initializer="zeros", trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
###Output
_____no_output_____
###Markdown
Layers can have non-trainable weightsBesides trainable weights, you can add non-trainable weights to a layer aswell. Such weights are meant not to be taken into account duringbackpropagation, when you are training the layer.Here's how to add and use a non-trainable weight:
###Code
class ComputeSum(keras.layers.Layer):
def __init__(self, input_dim):
super(ComputeSum, self).__init__()
self.total = tf.Variable(initial_value=tf.zeros((input_dim,)), trainable=False)
def call(self, inputs):
self.total.assign_add(tf.reduce_sum(inputs, axis=0))
return self.total
x = tf.ones((2, 2))
my_sum = ComputeSum(2)
y = my_sum(x)
print(y.numpy())
y = my_sum(x)
print(y.numpy())
###Output
_____no_output_____
###Markdown
It's part of `layer.weights`, but it gets categorized as a non-trainable weight:
###Code
print("weights:", len(my_sum.weights))
print("non-trainable weights:", len(my_sum.non_trainable_weights))
# It's not included in the trainable weights:
print("trainable_weights:", my_sum.trainable_weights)
###Output
_____no_output_____
###Markdown
Best practice: deferring weight creation until the shape of the inputs is knownOur `Linear` layer above took an `input_dim `argument that was used to computethe shape of the weights `w` and `b` in `__init__()`:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
self.w = self.add_weight(
shape=(input_dim, units), initializer="random_normal", trainable=True
)
self.b = self.add_weight(shape=(units,), initializer="zeros", trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
###Output
_____no_output_____
###Markdown
In many cases, you may not know in advance the size of your inputs, and you would like to lazily create weights when that value becomes known, some time after instantiating the layer. In the Keras API, we recommend creating layer weights in the `build(self, input_shape)` method of your layer. Like this:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32):
super(Linear, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
###Output
_____no_output_____
###Markdown
The `__call__()` method of your layer will automatically run build the first timeit is called. You now have a layer that's lazy and thus easier to use:
###Code
# At instantiation, we don't know on what inputs this is going to get called
linear_layer = Linear(32)
# The layer's weights are created dynamically the first time the layer is called
y = linear_layer(x)
###Output
_____no_output_____
###Markdown
Layers are recursively composableIf you assign a Layer instance as an attribute of another Layer, the outer layerwill start tracking the weights of the inner layer.We recommend creating such sublayers in the `__init__()` method (since thesublayers will typically have a build method, they will be built when theouter layer gets built).
###Code
# Let's assume we are reusing the Linear class
# with a `build` method that we defined above.
class MLPBlock(keras.layers.Layer):
def __init__(self):
super(MLPBlock, self).__init__()
self.linear_1 = Linear(32)
self.linear_2 = Linear(32)
self.linear_3 = Linear(1)
def call(self, inputs):
x = self.linear_1(inputs)
x = tf.nn.relu(x)
x = self.linear_2(x)
x = tf.nn.relu(x)
return self.linear_3(x)
mlp = MLPBlock()
y = mlp(tf.ones(shape=(3, 64))) # The first call to the `mlp` will create the weights
print("weights:", len(mlp.weights))
print("trainable weights:", len(mlp.trainable_weights))
###Output
_____no_output_____
###Markdown
The `add_loss()` methodWhen writing the `call()` method of a layer, you can create loss tensors thatyou will want to use later, when writing your training loop. This is doable bycalling `self.add_loss(value)`:
###Code
# A layer that creates an activity regularization loss
class ActivityRegularizationLayer(keras.layers.Layer):
def __init__(self, rate=1e-2):
super(ActivityRegularizationLayer, self).__init__()
self.rate = rate
def call(self, inputs):
self.add_loss(self.rate * tf.reduce_sum(inputs))
return inputs
###Output
_____no_output_____
###Markdown
These losses (including those created by any inner layer) can be retrieved via`layer.losses`. This property is reset at the start of every `__call__()` tothe top-level layer, so that `layer.losses` always contains the loss valuescreated during the last forward pass.
###Code
class OuterLayer(keras.layers.Layer):
def __init__(self):
super(OuterLayer, self).__init__()
self.activity_reg = ActivityRegularizationLayer(1e-2)
def call(self, inputs):
return self.activity_reg(inputs)
layer = OuterLayer()
assert len(layer.losses) == 0 # No losses yet since the layer has never been called
_ = layer(tf.zeros(1, 1))
assert len(layer.losses) == 1 # We created one loss value
# `layer.losses` gets reset at the start of each __call__
_ = layer(tf.zeros(1, 1))
assert len(layer.losses) == 1 # This is the loss created during the call above
###Output
_____no_output_____
###Markdown
In addition, the `losses` property also contains regularization losses created for the weights of any inner layer:
###Code
class OuterLayerWithKernelRegularizer(keras.layers.Layer):
def __init__(self):
super(OuterLayerWithKernelRegularizer, self).__init__()
self.dense = keras.layers.Dense(
32, kernel_regularizer=tf.keras.regularizers.l2(1e-3)
)
def call(self, inputs):
return self.dense(inputs)
layer = OuterLayerWithKernelRegularizer()
_ = layer(tf.zeros((1, 1)))
# This is `1e-3 * sum(layer.dense.kernel ** 2)`,
# created by the `kernel_regularizer` above.
print(layer.losses)
###Output
_____no_output_____
###Markdown
These losses are meant to be taken into account when writing training loops,like this:```python Instantiate an optimizer.optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True) Iterate over the batches of a dataset.for x_batch_train, y_batch_train in train_dataset: with tf.GradientTape() as tape: logits = layer(x_batch_train) Logits for this minibatch Loss value for this minibatch loss_value = loss_fn(y_batch_train, logits) Add extra losses created during this forward pass: loss_value += sum(model.losses) grads = tape.gradient(loss_value, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights))``` For a detailed guide about writing training loops, see the[guide to writing a training loop from scratch](https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch/).These losses also work seamlessly with `fit()` (they get automatically summedand added to the main loss, if any):
###Code
import numpy as np
inputs = keras.Input(shape=(3,))
outputs = ActivityRegularizationLayer()(inputs)
model = keras.Model(inputs, outputs)
# If there is a loss passed in `compile`, the regularization
# losses get added to it
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))
# It's also possible not to pass any loss in `compile`,
# since the model already has a loss to minimize, via the `add_loss`
# call during the forward pass!
model.compile(optimizer="adam")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))
###Output
_____no_output_____
###Markdown
The `add_metric()` methodSimilarly to `add_loss()`, layers also have an `add_metric()` methodfor tracking the moving average of a quantity during training.Consider the following layer: a "logistic endpoint" layer.It takes as inputs predictions & targets, it computes a loss which it tracksvia `add_loss()`, and it computes an accuracy scalar, which it tracks via`add_metric()`.
###Code
class LogisticEndpoint(keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
self.accuracy_fn = keras.metrics.BinaryAccuracy()
def call(self, targets, logits, sample_weights=None):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
loss = self.loss_fn(targets, logits, sample_weights)
self.add_loss(loss)
# Log accuracy as a metric and add it
# to the layer using `self.add_metric()`.
acc = self.accuracy_fn(targets, logits, sample_weights)
self.add_metric(acc, name="accuracy")
# Return the inference-time prediction tensor (for `.predict()`).
return tf.nn.softmax(logits)
###Output
_____no_output_____
###Markdown
Metrics tracked in this way are accessible via `layer.metrics`:
###Code
layer = LogisticEndpoint()
targets = tf.ones((2, 2))
logits = tf.ones((2, 2))
y = layer(targets, logits)
print("layer.metrics:", layer.metrics)
print("current accuracy value:", float(layer.metrics[0].result()))
###Output
_____no_output_____
###Markdown
Just like for `add_loss()`, these metrics are tracked by `fit()`:
###Code
inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(10,), name="targets")
logits = keras.layers.Dense(10)(inputs)
predictions = LogisticEndpoint(name="predictions")(logits, targets)
model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam")
data = {
"inputs": np.random.random((3, 3)),
"targets": np.random.random((3, 10)),
}
model.fit(data)
###Output
_____no_output_____
###Markdown
You can optionally enable serialization on your layersIf you need your custom layers to be serializable as part of a[Functional model](https://www.tensorflow.org/guide/keras/functional/), you can optionally implement a `get_config()`method:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32):
super(Linear, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {"units": self.units}
# Now you can recreate the layer from its config:
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
###Output
_____no_output_____
###Markdown
Note that the `__init__()` method of the base `Layer` class takes some keywordarguments, in particular a `name` and a `dtype`. It's good practice to passthese arguments to the parent class in `__init__()` and to include them in thelayer config:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32, **kwargs):
super(Linear, self).__init__(**kwargs)
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
config = super(Linear, self).get_config()
config.update({"units": self.units})
return config
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
###Output
_____no_output_____
###Markdown
If you need more flexibility when deserializing the layer from its config, youcan also override the `from_config()` class method. This is the baseimplementation of `from_config()`:```pythondef from_config(cls, config): return cls(**config)```To learn more about serialization and saving, see the complete[guide to saving and serializing models](https://www.tensorflow.org/guide/keras/save_and_serialize/). Privileged `training` argument in the `call()` methodSome layers, in particular the `BatchNormalization` layer and the `Dropout`layer, have different behaviors during training and inference. For suchlayers, it is standard practice to expose a `training` (boolean) argument inthe `call()` method.By exposing this argument in `call()`, you enable the built-in training andevaluation loops (e.g. `fit()`) to correctly use the layer in training andinference.
###Code
class CustomDropout(keras.layers.Layer):
def __init__(self, rate, **kwargs):
super(CustomDropout, self).__init__(**kwargs)
self.rate = rate
def call(self, inputs, training=None):
if training:
return tf.nn.dropout(inputs, rate=self.rate)
return inputs
###Output
_____no_output_____
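###Markdown
The `mask` argument discussed in the next cell is forwarded in the same way as `training`. A minimal, hypothetical sketch of a mask-consuming layer (the class name is illustrative, not a Keras built-in): it averages over timesteps while ignoring the padded positions flagged by an `Embedding(..., mask_zero=True)` layer.
###Code
class MaskedMeanPooling(keras.layers.Layer):
    def call(self, inputs, mask=None):
        if mask is None:
            return tf.reduce_mean(inputs, axis=1)
        mask = tf.cast(mask, inputs.dtype)      # (batch, timesteps)
        mask = tf.expand_dims(mask, -1)         # (batch, timesteps, 1)
        summed = tf.reduce_sum(inputs * mask, axis=1)
        counts = tf.maximum(tf.reduce_sum(mask, axis=1), 1.0)
        return summed / counts
x = tf.constant([[1, 2, 0], [3, 0, 0]])         # 0 marks padding
emb = keras.layers.Embedding(input_dim=10, output_dim=4, mask_zero=True)
pooled = MaskedMeanPooling()(emb(x), mask=emb.compute_mask(x))
print(pooled.shape)  # (2, 4)
###Output
_____no_output_____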
###Markdown
Privileged `mask` argument in the `call()` methodThe other privileged argument supported by `call()` is the `mask` argument.You will find it in all Keras RNN layers. A mask is a boolean tensor (oneboolean value per timestep in the input) used to skip certain input timestepswhen processing timeseries data.Keras will automatically pass the correct `mask` argument to `__call__()` forlayers that support it, when a mask is generated by a prior layer.Mask-generating layers are the `Embedding`layer configured with `mask_zero=True`, and the `Masking` layer.To learn more about masking and how to write masking-enabled layers, pleasecheck out the guide["understanding padding and masking"](https://www.tensorflow.org/guide/keras/masking_and_padding/). The `Model` classIn general, you will use the `Layer` class to define inner computation blocks,and will use the `Model` class to define the outer model -- the object youwill train.For instance, in a ResNet50 model, you would have several ResNet blockssubclassing `Layer`, and a single `Model` encompassing the entire ResNet50network.The `Model` class has the same API as `Layer`, with the following differences:- It exposes built-in training, evaluation, and prediction loops(`model.fit()`, `model.evaluate()`, `model.predict()`).- It exposes the list of its inner layers, via the `model.layers` property.- It exposes saving and serialization APIs (`save()`, `save_weights()`...)Effectively, the `Layer` class corresponds to what we refer to in theliterature as a "layer" (as in "convolution layer" or "recurrent layer") or asa "block" (as in "ResNet block" or "Inception block").Meanwhile, the `Model` class corresponds to what is referred to in theliterature as a "model" (as in "deep learning model") or as a "network" (as in"deep neural network").So if you're wondering, "should I use the `Layer` class or the `Model` class?",ask yourself: will I need to call `fit()` on it? Will I need to call `save()`on it? If so, go with `Model`. If not (either because your class is just a blockin a bigger system, or because you are writing training & saving code yourself),use `Layer`.For instance, we could take our mini-resnet example above, and use it to builda `Model` that we could train with `fit()`, and that we could save with`save_weights()`: ```pythonclass ResNet(tf.keras.Model): def __init__(self, num_classes=1000): super(ResNet, self).__init__() self.block_1 = ResNetBlock() self.block_2 = ResNetBlock() self.global_pool = layers.GlobalAveragePooling2D() self.classifier = Dense(num_classes) def call(self, inputs): x = self.block_1(inputs) x = self.block_2(x) x = self.global_pool(x) return self.classifier(x)resnet = ResNet()dataset = ...resnet.fit(dataset, epochs=10)resnet.save(filepath)``` Putting it all together: an end-to-end exampleHere's what you've learned so far:- A `Layer` encapsulate a state (created in `__init__()` or `build()`) and somecomputation (defined in `call()`).- Layers can be recursively nested to create new, bigger computation blocks.- Layers can create and track losses (typically regularization losses) as wellas metrics, via `add_loss()` and `add_metric()`- The outer container, the thing you want to train, is a `Model`. A `Model` isjust like a `Layer`, but with added training and serialization utilities.Let's put all of these things together into an end-to-end example: we're goingto implement a Variational AutoEncoder (VAE). We'll train it on MNIST digits.Our VAE will be a subclass of `Model`, built as a nested composition of layersthat subclass `Layer`. 
It will feature a regularization loss (KL divergence).
###Code
from tensorflow.keras import layers
class Sampling(layers.Layer):
"""Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""
def call(self, inputs):
z_mean, z_log_var = inputs
batch = tf.shape(z_mean)[0]
dim = tf.shape(z_mean)[1]
epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
return z_mean + tf.exp(0.5 * z_log_var) * epsilon
class Encoder(layers.Layer):
"""Maps MNIST digits to a triplet (z_mean, z_log_var, z)."""
def __init__(self, latent_dim=32, intermediate_dim=64, name="encoder", **kwargs):
super(Encoder, self).__init__(name=name, **kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation="relu")
self.dense_mean = layers.Dense(latent_dim)
self.dense_log_var = layers.Dense(latent_dim)
self.sampling = Sampling()
def call(self, inputs):
x = self.dense_proj(inputs)
z_mean = self.dense_mean(x)
z_log_var = self.dense_log_var(x)
z = self.sampling((z_mean, z_log_var))
return z_mean, z_log_var, z
class Decoder(layers.Layer):
"""Converts z, the encoded digit vector, back into a readable digit."""
def __init__(self, original_dim, intermediate_dim=64, name="decoder", **kwargs):
super(Decoder, self).__init__(name=name, **kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation="relu")
self.dense_output = layers.Dense(original_dim, activation="sigmoid")
def call(self, inputs):
x = self.dense_proj(inputs)
return self.dense_output(x)
class VariationalAutoEncoder(keras.Model):
"""Combines the encoder and decoder into an end-to-end model for training."""
def __init__(
self,
original_dim,
intermediate_dim=64,
latent_dim=32,
name="autoencoder",
**kwargs
):
super(VariationalAutoEncoder, self).__init__(name=name, **kwargs)
self.original_dim = original_dim
self.encoder = Encoder(latent_dim=latent_dim, intermediate_dim=intermediate_dim)
self.decoder = Decoder(original_dim, intermediate_dim=intermediate_dim)
def call(self, inputs):
z_mean, z_log_var, z = self.encoder(inputs)
reconstructed = self.decoder(z)
# Add KL divergence regularization loss.
kl_loss = -0.5 * tf.reduce_mean(
z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1
)
self.add_loss(kl_loss)
return reconstructed
###Output
_____no_output_____
###Markdown
Let's write a simple training loop on MNIST:
###Code
original_dim = 784
vae = VariationalAutoEncoder(original_dim, 64, 32)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
mse_loss_fn = tf.keras.losses.MeanSquaredError()
loss_metric = tf.keras.metrics.Mean()
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype("float32") / 255
train_dataset = tf.data.Dataset.from_tensor_slices(x_train)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
epochs = 2
# Iterate over epochs.
for epoch in range(epochs):
print("Start of epoch %d" % (epoch,))
# Iterate over the batches of the dataset.
for step, x_batch_train in enumerate(train_dataset):
with tf.GradientTape() as tape:
reconstructed = vae(x_batch_train)
# Compute reconstruction loss
loss = mse_loss_fn(x_batch_train, reconstructed)
loss += sum(vae.losses) # Add KLD regularization loss
grads = tape.gradient(loss, vae.trainable_weights)
optimizer.apply_gradients(zip(grads, vae.trainable_weights))
loss_metric(loss)
if step % 100 == 0:
print("step %d: mean loss = %.4f" % (step, loss_metric.result()))
###Output
_____no_output_____
###Markdown
Note that since the VAE is subclassing `Model`, it features built-in training loops. So you could also have trained it like this:
###Code
vae = VariationalAutoEncoder(784, 64, 32)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=2, batch_size=64)
###Output
_____no_output_____
###Markdown
Beyond object-oriented development: the Functional API Was this example too much object-oriented development for you? You can also build models using the [Functional API](https://www.tensorflow.org/guide/keras/functional/). Importantly, choosing one style or another does not prevent you from leveraging components written in the other style: you can always mix-and-match. For instance, the Functional API example below reuses the same `Sampling` layer we defined in the example above:
###Code
original_dim = 784
intermediate_dim = 64
latent_dim = 32
# Define encoder model.
original_inputs = tf.keras.Input(shape=(original_dim,), name="encoder_input")
x = layers.Dense(intermediate_dim, activation="relu")(original_inputs)
z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)
z = Sampling()((z_mean, z_log_var))
encoder = tf.keras.Model(inputs=original_inputs, outputs=z, name="encoder")
# Define decoder model.
latent_inputs = tf.keras.Input(shape=(latent_dim,), name="z_sampling")
x = layers.Dense(intermediate_dim, activation="relu")(latent_inputs)
outputs = layers.Dense(original_dim, activation="sigmoid")(x)
decoder = tf.keras.Model(inputs=latent_inputs, outputs=outputs, name="decoder")
# Define VAE model.
outputs = decoder(z)
vae = tf.keras.Model(inputs=original_inputs, outputs=outputs, name="vae")
# Add KL divergence regularization loss.
kl_loss = -0.5 * tf.reduce_mean(z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)
vae.add_loss(kl_loss)
# Train.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=3, batch_size=64)
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Making new Layers and Models via subclassing Setup
###Code
import tensorflow as tf
from tensorflow import keras
###Output
_____no_output_____
###Markdown
The `Layer` class: the combination of state (weights) and some computation One of the central abstractions in Keras is the `Layer` class. A layer encapsulates both a state (the layer's "weights") and a transformation from inputs to outputs (a "call", the layer's forward pass). Here's a densely-connected layer. It has a state: the variables `w` and `b`.
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
w_init = tf.random_normal_initializer()
self.w = tf.Variable(
initial_value=w_init(shape=(input_dim, units), dtype="float32"),
trainable=True,
)
b_init = tf.zeros_initializer()
self.b = tf.Variable(
initial_value=b_init(shape=(units,), dtype="float32"), trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
###Output
_____no_output_____
###Markdown
You would use a layer by calling it on some tensor input(s), much like a Python function.
###Code
x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
###Output
_____no_output_____
###Markdown
Note that the weights `w` and `b` are automatically tracked by the layer upon being set as layer attributes:
###Code
assert linear_layer.weights == [linear_layer.w, linear_layer.b]
###Output
_____no_output_____
###Markdown
Note that you also have access to a quicker shortcut for adding weights to a layer: the `add_weight()` method:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
self.w = self.add_weight(
shape=(input_dim, units), initializer="random_normal", trainable=True
)
self.b = self.add_weight(shape=(units,), initializer="zeros", trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
###Output
_____no_output_____
###Markdown
Layers can have non-trainable weights Besides trainable weights, you can add non-trainable weights to a layer as well. Such weights are meant not to be taken into account during backpropagation, when you are training the layer. Here's how to add and use a non-trainable weight:
###Code
class ComputeSum(keras.layers.Layer):
def __init__(self, input_dim):
super(ComputeSum, self).__init__()
self.total = tf.Variable(initial_value=tf.zeros((input_dim,)), trainable=False)
def call(self, inputs):
self.total.assign_add(tf.reduce_sum(inputs, axis=0))
return self.total
x = tf.ones((2, 2))
my_sum = ComputeSum(2)
y = my_sum(x)
print(y.numpy())
y = my_sum(x)
print(y.numpy())
###Output
_____no_output_____
###Markdown
It's part of `layer.weights`, but it gets categorized as a non-trainable weight:
###Code
print("weights:", len(my_sum.weights))
print("non-trainable weights:", len(my_sum.non_trainable_weights))
# It's not included in the trainable weights:
print("trainable_weights:", my_sum.trainable_weights)
###Output
_____no_output_____
###Markdown
Best practice: deferring weight creation until the shape of the inputs is known Our `Linear` layer above took an `input_dim` argument that was used to compute the shape of the weights `w` and `b` in `__init__()`:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
self.w = self.add_weight(
shape=(input_dim, units), initializer="random_normal", trainable=True
)
self.b = self.add_weight(shape=(units,), initializer="zeros", trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
###Output
_____no_output_____
###Markdown
In many cases, you may not know in advance the size of your inputs, and you would like to lazily create weights when that value becomes known, some time after instantiating the layer. In the Keras API, we recommend creating layer weights in the `build(self, input_shape)` method of your layer. Like this:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32):
super(Linear, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
###Output
_____no_output_____
###Markdown
The `__call__()` method of your layer will automatically run `build()` the first time it is called. You now have a layer that's lazy and thus easier to use:
###Code
# At instantiation, we don't know on what inputs this is going to get called
linear_layer = Linear(32)
# The layer's weights are created dynamically the first time the layer is called
y = linear_layer(x)
###Output
_____no_output_____
###Markdown
Implementing `build()` separately as shown above nicely separates creating weights only once from using weights in every call. However, for some advanced custom layers, it can become impractical to separate the state creation and computation. Layer implementers are allowed to defer weight creation to the first `__call__()`, but need to take care that later calls use the same weights. In addition, since `__call__()` is likely to be executed for the first time inside a `tf.function`, any variable creation that takes place in `__call__()` should be wrapped in a `tf.init_scope`. Layers are recursively composable If you assign a Layer instance as an attribute of another Layer, the outer layer will start tracking the weights created by the inner layer. We recommend creating such sublayers in the `__init__()` method and leave it to the first `__call__()` to trigger building their weights.
###Code
class MLPBlock(keras.layers.Layer):
def __init__(self):
super(MLPBlock, self).__init__()
self.linear_1 = Linear(32)
self.linear_2 = Linear(32)
self.linear_3 = Linear(1)
def call(self, inputs):
x = self.linear_1(inputs)
x = tf.nn.relu(x)
x = self.linear_2(x)
x = tf.nn.relu(x)
return self.linear_3(x)
mlp = MLPBlock()
y = mlp(tf.ones(shape=(3, 64))) # The first call to the `mlp` will create the weights
print("weights:", len(mlp.weights))
print("trainable weights:", len(mlp.trainable_weights))
###Output
_____no_output_____
###Markdown
The `add_loss()` method When writing the `call()` method of a layer, you can create loss tensors that you will want to use later, when writing your training loop. This is doable by calling `self.add_loss(value)`:
###Code
# A layer that creates an activity regularization loss
class ActivityRegularizationLayer(keras.layers.Layer):
def __init__(self, rate=1e-2):
super(ActivityRegularizationLayer, self).__init__()
self.rate = rate
def call(self, inputs):
self.add_loss(self.rate * tf.reduce_sum(inputs))
return inputs
###Output
_____no_output_____
###Markdown
These losses (including those created by any inner layer) can be retrieved via `layer.losses`. This property is reset at the start of every `__call__()` to the top-level layer, so that `layer.losses` always contains the loss values created during the last forward pass.
###Code
class OuterLayer(keras.layers.Layer):
def __init__(self):
super(OuterLayer, self).__init__()
self.activity_reg = ActivityRegularizationLayer(1e-2)
def call(self, inputs):
return self.activity_reg(inputs)
layer = OuterLayer()
assert len(layer.losses) == 0 # No losses yet since the layer has never been called
_ = layer(tf.zeros((1, 1)))
assert len(layer.losses) == 1 # We created one loss value
# `layer.losses` gets reset at the start of each __call__
_ = layer(tf.zeros((1, 1)))
assert len(layer.losses) == 1 # This is the loss created during the call above
###Output
_____no_output_____
###Markdown
In addition, the `losses` property also contains regularization losses created for the weights of any inner layer:
###Code
class OuterLayerWithKernelRegularizer(keras.layers.Layer):
def __init__(self):
super(OuterLayerWithKernelRegularizer, self).__init__()
self.dense = keras.layers.Dense(
32, kernel_regularizer=tf.keras.regularizers.l2(1e-3)
)
def call(self, inputs):
return self.dense(inputs)
layer = OuterLayerWithKernelRegularizer()
_ = layer(tf.zeros((1, 1)))
# This is `1e-3 * sum(layer.dense.kernel ** 2)`,
# created by the `kernel_regularizer` above.
print(layer.losses)
###Output
_____no_output_____
###Markdown
These losses are meant to be taken into account when writing training loops,like this:```python Instantiate an optimizer.optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True) Iterate over the batches of a dataset.for x_batch_train, y_batch_train in train_dataset: with tf.GradientTape() as tape: logits = layer(x_batch_train) Logits for this minibatch Loss value for this minibatch loss_value = loss_fn(y_batch_train, logits) Add extra losses created during this forward pass: loss_value += sum(model.losses) grads = tape.gradient(loss_value, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights))``` For a detailed guide about writing training loops, see the[guide to writing a training loop from scratch](https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch/).These losses also work seamlessly with `fit()` (they get automatically summedand added to the main loss, if any):
###Code
import numpy as np
inputs = keras.Input(shape=(3,))
outputs = ActivityRegularizationLayer()(inputs)
model = keras.Model(inputs, outputs)
# If there is a loss passed in `compile`, the regularization
# losses get added to it
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))
# It's also possible not to pass any loss in `compile`,
# since the model already has a loss to minimize, via the `add_loss`
# call during the forward pass!
model.compile(optimizer="adam")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))
###Output
_____no_output_____
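###Markdown
For completeness, here is a small runnable sketch of the custom-training-loop pattern described in the text above, reusing the `ActivityRegularizationLayer` defined earlier. The random `x_batch`/`y_batch` data and the single `Dense` output layer are assumptions made purely to keep the snippet self-contained; the point is the structure of adding `sum(model.losses)` to the main loss.
###Code
# Build a tiny regression model that includes the activity-regularization layer.
inputs = keras.Input(shape=(3,))
x = ActivityRegularizationLayer()(inputs)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)

optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)
loss_fn = keras.losses.MeanSquaredError()

x_batch = tf.random.normal((8, 3))
y_batch = tf.random.normal((8, 1))

for step in range(3):
    with tf.GradientTape() as tape:
        preds = model(x_batch)
        # Main loss for this minibatch ...
        loss_value = loss_fn(y_batch, preds)
        # ... plus the extra losses created during this forward pass.
        loss_value += sum(model.losses)
    grads = tape.gradient(loss_value, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
    print("step", step, "loss", float(loss_value))
###Output
_____no_output_____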
###Markdown
The `add_metric()` method Similarly to `add_loss()`, layers also have an `add_metric()` method for tracking the moving average of a quantity during training. Consider the following layer: a "logistic endpoint" layer. It takes as inputs predictions & targets, it computes a loss which it tracks via `add_loss()`, and it computes an accuracy scalar, which it tracks via `add_metric()`.
###Code
class LogisticEndpoint(keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
self.accuracy_fn = keras.metrics.BinaryAccuracy()
def call(self, targets, logits, sample_weights=None):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
loss = self.loss_fn(targets, logits, sample_weights)
self.add_loss(loss)
# Log accuracy as a metric and add it
# to the layer using `self.add_metric()`.
acc = self.accuracy_fn(targets, logits, sample_weights)
self.add_metric(acc, name="accuracy")
# Return the inference-time prediction tensor (for `.predict()`).
return tf.nn.softmax(logits)
###Output
_____no_output_____
###Markdown
Metrics tracked in this way are accessible via `layer.metrics`:
###Code
layer = LogisticEndpoint()
targets = tf.ones((2, 2))
logits = tf.ones((2, 2))
y = layer(targets, logits)
print("layer.metrics:", layer.metrics)
print("current accuracy value:", float(layer.metrics[0].result()))
###Output
_____no_output_____
###Markdown
Just like for `add_loss()`, these metrics are tracked by `fit()`:
###Code
inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(10,), name="targets")
logits = keras.layers.Dense(10)(inputs)
predictions = LogisticEndpoint(name="predictions")(logits, targets)
model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam")
data = {
"inputs": np.random.random((3, 3)),
"targets": np.random.random((3, 10)),
}
model.fit(data)
###Output
_____no_output_____
###Markdown
You can optionally enable serialization on your layers If you need your custom layers to be serializable as part of a [Functional model](https://www.tensorflow.org/guide/keras/functional/), you can optionally implement a `get_config()` method:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32):
super(Linear, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {"units": self.units}
# Now you can recreate the layer from its config:
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
###Output
_____no_output_____
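###Markdown
A quick way to exercise the config round-trip end to end is through Keras' generic layer (de)serialization helpers. This is a small sketch that assumes the `Linear` class defined just above is still in scope; custom classes have to be supplied via `custom_objects` when deserializing.
###Code
layer = Linear(64)

# Serialize to a plain dict holding the class name and the layer config.
serialized = keras.layers.serialize(layer)
print(serialized)

# Deserialize; custom classes must be passed via `custom_objects`.
restored = keras.layers.deserialize(serialized, custom_objects={"Linear": Linear})
print(restored.units)
###Output
_____no_output_____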
###Markdown
Note that the `__init__()` method of the base `Layer` class takes some keyword arguments, in particular a `name` and a `dtype`. It's good practice to pass these arguments to the parent class in `__init__()` and to include them in the layer config:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32, **kwargs):
super(Linear, self).__init__(**kwargs)
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
config = super(Linear, self).get_config()
config.update({"units": self.units})
return config
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
###Output
_____no_output_____
###Markdown
If you need more flexibility when deserializing the layer from its config, youcan also override the `from_config()` class method. This is the baseimplementation of `from_config()`:```pythondef from_config(cls, config): return cls(**config)```To learn more about serialization and saving, see the complete[guide to saving and serializing models](https://www.tensorflow.org/guide/keras/save_and_serialize/). Privileged `training` argument in the `call()` methodSome layers, in particular the `BatchNormalization` layer and the `Dropout`layer, have different behaviors during training and inference. For suchlayers, it is standard practice to expose a `training` (boolean) argument inthe `call()` method.By exposing this argument in `call()`, you enable the built-in training andevaluation loops (e.g. `fit()`) to correctly use the layer in training andinference.
###Code
class CustomDropout(keras.layers.Layer):
def __init__(self, rate, **kwargs):
super(CustomDropout, self).__init__(**kwargs)
self.rate = rate
def call(self, inputs, training=None):
if training:
return tf.nn.dropout(inputs, rate=self.rate)
return inputs
###Output
_____no_output_____
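###Markdown
A quick usage check of the layer above: with `training=False` (or the default) the inputs pass through unchanged, while `training=True` applies dropout. The exact zeroed entries are random, so the second output varies between runs.
###Code
dropout = CustomDropout(0.5)
x = tf.ones((2, 4))

print(dropout(x, training=False))  # identical to x
print(dropout(x, training=True))   # some entries zeroed, the rest scaled by 1/(1-rate)
###Output
_____no_output_____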
###Markdown
Privileged `mask` argument in the `call()` methodThe other privileged argument supported by `call()` is the `mask` argument.You will find it in all Keras RNN layers. A mask is a boolean tensor (oneboolean value per timestep in the input) used to skip certain input timestepswhen processing timeseries data.Keras will automatically pass the correct `mask` argument to `__call__()` forlayers that support it, when a mask is generated by a prior layer.Mask-generating layers are the `Embedding`layer configured with `mask_zero=True`, and the `Masking` layer.To learn more about masking and how to write masking-enabled layers, pleasecheck out the guide["understanding padding and masking"](https://www.tensorflow.org/guide/keras/masking_and_padding/). The `Model` classIn general, you will use the `Layer` class to define inner computation blocks,and will use the `Model` class to define the outer model -- the object youwill train.For instance, in a ResNet50 model, you would have several ResNet blockssubclassing `Layer`, and a single `Model` encompassing the entire ResNet50network.The `Model` class has the same API as `Layer`, with the following differences:- It exposes built-in training, evaluation, and prediction loops(`model.fit()`, `model.evaluate()`, `model.predict()`).- It exposes the list of its inner layers, via the `model.layers` property.- It exposes saving and serialization APIs (`save()`, `save_weights()`...)Effectively, the `Layer` class corresponds to what we refer to in theliterature as a "layer" (as in "convolution layer" or "recurrent layer") or asa "block" (as in "ResNet block" or "Inception block").Meanwhile, the `Model` class corresponds to what is referred to in theliterature as a "model" (as in "deep learning model") or as a "network" (as in"deep neural network").So if you're wondering, "should I use the `Layer` class or the `Model` class?",ask yourself: will I need to call `fit()` on it? Will I need to call `save()`on it? If so, go with `Model`. If not (either because your class is just a blockin a bigger system, or because you are writing training & saving code yourself),use `Layer`.For instance, we could take our mini-resnet example above, and use it to builda `Model` that we could train with `fit()`, and that we could save with`save_weights()`: ```pythonclass ResNet(tf.keras.Model): def __init__(self, num_classes=1000): super(ResNet, self).__init__() self.block_1 = ResNetBlock() self.block_2 = ResNetBlock() self.global_pool = layers.GlobalAveragePooling2D() self.classifier = Dense(num_classes) def call(self, inputs): x = self.block_1(inputs) x = self.block_2(x) x = self.global_pool(x) return self.classifier(x)resnet = ResNet()dataset = ...resnet.fit(dataset, epochs=10)resnet.save(filepath)``` Putting it all together: an end-to-end exampleHere's what you've learned so far:- A `Layer` encapsulate a state (created in `__init__()` or `build()`) and somecomputation (defined in `call()`).- Layers can be recursively nested to create new, bigger computation blocks.- Layers can create and track losses (typically regularization losses) as wellas metrics, via `add_loss()` and `add_metric()`- The outer container, the thing you want to train, is a `Model`. A `Model` isjust like a `Layer`, but with added training and serialization utilities.Let's put all of these things together into an end-to-end example: we're goingto implement a Variational AutoEncoder (VAE). We'll train it on MNIST digits.Our VAE will be a subclass of `Model`, built as a nested composition of layersthat subclass `Layer`. 
It will feature a regularization loss (KL divergence).
###Code
from tensorflow.keras import layers
class Sampling(layers.Layer):
"""Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""
def call(self, inputs):
z_mean, z_log_var = inputs
batch = tf.shape(z_mean)[0]
dim = tf.shape(z_mean)[1]
epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
return z_mean + tf.exp(0.5 * z_log_var) * epsilon
class Encoder(layers.Layer):
"""Maps MNIST digits to a triplet (z_mean, z_log_var, z)."""
def __init__(self, latent_dim=32, intermediate_dim=64, name="encoder", **kwargs):
super(Encoder, self).__init__(name=name, **kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation="relu")
self.dense_mean = layers.Dense(latent_dim)
self.dense_log_var = layers.Dense(latent_dim)
self.sampling = Sampling()
def call(self, inputs):
x = self.dense_proj(inputs)
z_mean = self.dense_mean(x)
z_log_var = self.dense_log_var(x)
z = self.sampling((z_mean, z_log_var))
return z_mean, z_log_var, z
class Decoder(layers.Layer):
"""Converts z, the encoded digit vector, back into a readable digit."""
def __init__(self, original_dim, intermediate_dim=64, name="decoder", **kwargs):
super(Decoder, self).__init__(name=name, **kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation="relu")
self.dense_output = layers.Dense(original_dim, activation="sigmoid")
def call(self, inputs):
x = self.dense_proj(inputs)
return self.dense_output(x)
class VariationalAutoEncoder(keras.Model):
"""Combines the encoder and decoder into an end-to-end model for training."""
def __init__(
self,
original_dim,
intermediate_dim=64,
latent_dim=32,
name="autoencoder",
**kwargs
):
super(VariationalAutoEncoder, self).__init__(name=name, **kwargs)
self.original_dim = original_dim
self.encoder = Encoder(latent_dim=latent_dim, intermediate_dim=intermediate_dim)
self.decoder = Decoder(original_dim, intermediate_dim=intermediate_dim)
def call(self, inputs):
z_mean, z_log_var, z = self.encoder(inputs)
reconstructed = self.decoder(z)
# Add KL divergence regularization loss.
kl_loss = -0.5 * tf.reduce_mean(
z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1
)
self.add_loss(kl_loss)
return reconstructed
###Output
_____no_output_____
###Markdown
Let's write a simple training loop on MNIST:
###Code
original_dim = 784
vae = VariationalAutoEncoder(original_dim, 64, 32)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
mse_loss_fn = tf.keras.losses.MeanSquaredError()
loss_metric = tf.keras.metrics.Mean()
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype("float32") / 255
train_dataset = tf.data.Dataset.from_tensor_slices(x_train)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
epochs = 2
# Iterate over epochs.
for epoch in range(epochs):
print("Start of epoch %d" % (epoch,))
# Iterate over the batches of the dataset.
for step, x_batch_train in enumerate(train_dataset):
with tf.GradientTape() as tape:
reconstructed = vae(x_batch_train)
# Compute reconstruction loss
loss = mse_loss_fn(x_batch_train, reconstructed)
loss += sum(vae.losses) # Add KLD regularization loss
grads = tape.gradient(loss, vae.trainable_weights)
optimizer.apply_gradients(zip(grads, vae.trainable_weights))
loss_metric(loss)
if step % 100 == 0:
print("step %d: mean loss = %.4f" % (step, loss_metric.result()))
###Output
_____no_output_____
###Markdown
Note that since the VAE is subclassing `Model`, it features built-in training loops. So you could also have trained it like this:
###Code
vae = VariationalAutoEncoder(784, 64, 32)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=2, batch_size=64)
###Output
_____no_output_____
###Markdown
Beyond object-oriented development: the Functional API Was this example too much object-oriented development for you? You can also build models using the [Functional API](https://www.tensorflow.org/guide/keras/functional/). Importantly, choosing one style or another does not prevent you from leveraging components written in the other style: you can always mix-and-match. For instance, the Functional API example below reuses the same `Sampling` layer we defined in the example above:
###Code
original_dim = 784
intermediate_dim = 64
latent_dim = 32
# Define encoder model.
original_inputs = tf.keras.Input(shape=(original_dim,), name="encoder_input")
x = layers.Dense(intermediate_dim, activation="relu")(original_inputs)
z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)
z = Sampling()((z_mean, z_log_var))
encoder = tf.keras.Model(inputs=original_inputs, outputs=z, name="encoder")
# Define decoder model.
latent_inputs = tf.keras.Input(shape=(latent_dim,), name="z_sampling")
x = layers.Dense(intermediate_dim, activation="relu")(latent_inputs)
outputs = layers.Dense(original_dim, activation="sigmoid")(x)
decoder = tf.keras.Model(inputs=latent_inputs, outputs=outputs, name="decoder")
# Define VAE model.
outputs = decoder(z)
vae = tf.keras.Model(inputs=original_inputs, outputs=outputs, name="vae")
# Add KL divergence regularization loss.
kl_loss = -0.5 * tf.reduce_mean(z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)
vae.add_loss(kl_loss)
# Train.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=3, batch_size=64)
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Making new Layers & Models via subclassing Setup
###Code
import tensorflow as tf
from tensorflow import keras
###Output
_____no_output_____
###Markdown
The `Layer` class: the combination of state (weights) and some computation One of the central abstractions in Keras is the `Layer` class. A layer encapsulates both a state (the layer's "weights") and a transformation from inputs to outputs (a "call", the layer's forward pass). Here's a densely-connected layer. It has a state: the variables `w` and `b`.
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
w_init = tf.random_normal_initializer()
self.w = tf.Variable(
initial_value=w_init(shape=(input_dim, units), dtype="float32"),
trainable=True,
)
b_init = tf.zeros_initializer()
self.b = tf.Variable(
initial_value=b_init(shape=(units,), dtype="float32"), trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
###Output
_____no_output_____
###Markdown
You would use a layer by calling it on some tensor input(s), much like a Python function.
###Code
x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
###Output
_____no_output_____
###Markdown
Note that the weights `w` and `b` are automatically tracked by the layer upon being set as layer attributes:
###Code
assert linear_layer.weights == [linear_layer.w, linear_layer.b]
###Output
_____no_output_____
###Markdown
Note that you also have access to a quicker shortcut for adding weights to a layer: the `add_weight()` method:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
self.w = self.add_weight(
shape=(input_dim, units), initializer="random_normal", trainable=True
)
self.b = self.add_weight(shape=(units,), initializer="zeros", trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
###Output
_____no_output_____
###Markdown
Layers can have non-trainable weights Besides trainable weights, you can add non-trainable weights to a layer as well. Such weights are meant not to be taken into account during backpropagation, when you are training the layer. Here's how to add and use a non-trainable weight:
###Code
class ComputeSum(keras.layers.Layer):
def __init__(self, input_dim):
super(ComputeSum, self).__init__()
self.total = tf.Variable(initial_value=tf.zeros((input_dim,)), trainable=False)
def call(self, inputs):
self.total.assign_add(tf.reduce_sum(inputs, axis=0))
return self.total
x = tf.ones((2, 2))
my_sum = ComputeSum(2)
y = my_sum(x)
print(y.numpy())
y = my_sum(x)
print(y.numpy())
###Output
_____no_output_____
###Markdown
It's part of `layer.weights`, but it gets categorized as a non-trainable weight:
###Code
print("weights:", len(my_sum.weights))
print("non-trainable weights:", len(my_sum.non_trainable_weights))
# It's not included in the trainable weights:
print("trainable_weights:", my_sum.trainable_weights)
###Output
_____no_output_____
###Markdown
Best practice: deferring weight creation until the shape of the inputs is known Our `Linear` layer above took an `input_dim` argument that was used to compute the shape of the weights `w` and `b` in `__init__()`:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
self.w = self.add_weight(
shape=(input_dim, units), initializer="random_normal", trainable=True
)
self.b = self.add_weight(shape=(units,), initializer="zeros", trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
###Output
_____no_output_____
###Markdown
In many cases, you may not know in advance the size of your inputs, and you would like to lazily create weights when that value becomes known, some time after instantiating the layer. In the Keras API, we recommend creating layer weights in the `build(self, input_shape)` method of your layer. Like this:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32):
super(Linear, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
###Output
_____no_output_____
###Markdown
The `__call__()` method of your layer will automatically run `build()` the first time it is called. You now have a layer that's lazy and thus easier to use:
###Code
# At instantiation, we don't know on what inputs this is going to get called
linear_layer = Linear(32)
# The layer's weights are created dynamically the first time the layer is called
y = linear_layer(x)
###Output
_____no_output_____
###Markdown
Layers are recursively composable If you assign a Layer instance as an attribute of another Layer, the outer layer will start tracking the weights of the inner layer. We recommend creating such sublayers in the `__init__()` method (since the sublayers will typically have a build method, they will be built when the outer layer gets built).
###Code
# Let's assume we are reusing the Linear class
# with a `build` method that we defined above.
class MLPBlock(keras.layers.Layer):
def __init__(self):
super(MLPBlock, self).__init__()
self.linear_1 = Linear(32)
self.linear_2 = Linear(32)
self.linear_3 = Linear(1)
def call(self, inputs):
x = self.linear_1(inputs)
x = tf.nn.relu(x)
x = self.linear_2(x)
x = tf.nn.relu(x)
return self.linear_3(x)
mlp = MLPBlock()
y = mlp(tf.ones(shape=(3, 64))) # The first call to the `mlp` will create the weights
print("weights:", len(mlp.weights))
print("trainable weights:", len(mlp.trainable_weights))
###Output
_____no_output_____
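###Markdown
Weight tracking also extends to sublayers stored in plain Python containers such as lists, which is handy when the number of sublayers is itself a parameter. The sketch below (the class name `LinearStack` is invented for this example) reuses the `Linear` layer defined earlier.
###Code
class LinearStack(keras.layers.Layer):
    def __init__(self, num_blocks=3, units=32):
        super(LinearStack, self).__init__()
        # Layers held in a list attribute are tracked just like direct attributes.
        self.blocks = [Linear(units) for _ in range(num_blocks)]

    def call(self, inputs):
        x = inputs
        for block in self.blocks:
            x = tf.nn.relu(block(x))
        return x

stack = LinearStack()
_ = stack(tf.ones((2, 16)))
print("weights:", len(stack.weights))  # 3 blocks x (w, b) = 6
###Output
_____no_output_____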
###Markdown
The `add_loss()` method When writing the `call()` method of a layer, you can create loss tensors that you will want to use later, when writing your training loop. This is doable by calling `self.add_loss(value)`:
###Code
# A layer that creates an activity regularization loss
class ActivityRegularizationLayer(keras.layers.Layer):
def __init__(self, rate=1e-2):
super(ActivityRegularizationLayer, self).__init__()
self.rate = rate
def call(self, inputs):
self.add_loss(self.rate * tf.reduce_sum(inputs))
return inputs
###Output
_____no_output_____
###Markdown
These losses (including those created by any inner layer) can be retrieved via `layer.losses`. This property is reset at the start of every `__call__()` to the top-level layer, so that `layer.losses` always contains the loss values created during the last forward pass.
###Code
class OuterLayer(keras.layers.Layer):
def __init__(self):
super(OuterLayer, self).__init__()
self.activity_reg = ActivityRegularizationLayer(1e-2)
def call(self, inputs):
return self.activity_reg(inputs)
layer = OuterLayer()
assert len(layer.losses) == 0 # No losses yet since the layer has never been called
_ = layer(tf.zeros((1, 1)))
assert len(layer.losses) == 1 # We created one loss value
# `layer.losses` gets reset at the start of each __call__
_ = layer(tf.zeros((1, 1)))
assert len(layer.losses) == 1 # This is the loss created during the call above
###Output
_____no_output_____
###Markdown
In addition, the `losses` property also contains regularization losses created for the weights of any inner layer:
###Code
class OuterLayerWithKernelRegularizer(keras.layers.Layer):
def __init__(self):
super(OuterLayerWithKernelRegularizer, self).__init__()
self.dense = keras.layers.Dense(
32, kernel_regularizer=tf.keras.regularizers.l2(1e-3)
)
def call(self, inputs):
return self.dense(inputs)
layer = OuterLayerWithKernelRegularizer()
_ = layer(tf.zeros((1, 1)))
# This is `1e-3 * sum(layer.dense.kernel ** 2)`,
# created by the `kernel_regularizer` above.
print(layer.losses)
###Output
_____no_output_____
###Markdown
These losses are meant to be taken into account when writing training loops,like this:```python Instantiate an optimizer.optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True) Iterate over the batches of a dataset.for x_batch_train, y_batch_train in train_dataset: with tf.GradientTape() as tape: logits = layer(x_batch_train) Logits for this minibatch Loss value for this minibatch loss_value = loss_fn(y_batch_train, logits) Add extra losses created during this forward pass: loss_value += sum(model.losses) grads = tape.gradient(loss_value, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights))``` For a detailed guide about writing training loops, see the[guide to writing a training loop from scratch](https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch/).These losses also work seamlessly with `fit()` (they get automatically summedand added to the main loss, if any):
###Code
import numpy as np
inputs = keras.Input(shape=(3,))
outputs = ActivityRegularizationLayer()(inputs)
model = keras.Model(inputs, outputs)
# If there is a loss passed in `compile`, the regularization
# losses get added to it
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))
# It's also possible not to pass any loss in `compile`,
# since the model already has a loss to minimize, via the `add_loss`
# call during the forward pass!
model.compile(optimizer="adam")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))
###Output
_____no_output_____
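###Markdown
You can also inspect the extra loss term directly: after a manual forward pass, `model.losses` holds the activity-regularization value created by that pass. The random input below is arbitrary and only serves to trigger a call.
###Code
_ = model(np.random.random((2, 3)).astype("float32"))
print(model.losses)  # one scalar tensor contributed by ActivityRegularizationLayer
###Output
_____no_output_____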
###Markdown
The `add_metric()` method Similarly to `add_loss()`, layers also have an `add_metric()` method for tracking the moving average of a quantity during training. Consider the following layer: a "logistic endpoint" layer. It takes as inputs predictions & targets, it computes a loss which it tracks via `add_loss()`, and it computes an accuracy scalar, which it tracks via `add_metric()`.
###Code
class LogisticEndpoint(keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
self.accuracy_fn = keras.metrics.BinaryAccuracy()
def call(self, targets, logits, sample_weights=None):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
loss = self.loss_fn(targets, logits, sample_weights)
self.add_loss(loss)
# Log accuracy as a metric and add it
# to the layer using `self.add_metric()`.
acc = self.accuracy_fn(targets, logits, sample_weights)
self.add_metric(acc, name="accuracy")
# Return the inference-time prediction tensor (for `.predict()`).
return tf.nn.softmax(logits)
###Output
_____no_output_____
###Markdown
Metrics tracked in this way are accessible via `layer.metrics`:
###Code
layer = LogisticEndpoint()
targets = tf.ones((2, 2))
logits = tf.ones((2, 2))
y = layer(targets, logits)
print("layer.metrics:", layer.metrics)
print("current accuracy value:", float(layer.metrics[0].result()))
###Output
_____no_output_____
###Markdown
Just like for `add_loss()`, these metrics are tracked by `fit()`:
###Code
inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(10,), name="targets")
logits = keras.layers.Dense(10)(inputs)
predictions = LogisticEndpoint(name="predictions")(logits, targets)
model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam")
data = {
"inputs": np.random.random((3, 3)),
"targets": np.random.random((3, 10)),
}
model.fit(data)
###Output
_____no_output_____
###Markdown
You can optionally enable serialization on your layers If you need your custom layers to be serializable as part of a [Functional model](https://www.tensorflow.org/guide/keras/functional/), you can optionally implement a `get_config()` method:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32):
super(Linear, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {"units": self.units}
# Now you can recreate the layer from its config:
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
###Output
_____no_output_____
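###Markdown
One thing worth keeping in mind: `get_config()` / `from_config()` round-trips the layer's architecture, not its weights. The short check below (variable names are illustrative) shows that a layer recreated from a config starts out unbuilt and gets freshly initialized weights on its first call.
###Code
original = Linear(16)
_ = original(tf.ones((1, 4)))          # builds w and b

clone = Linear.from_config(original.get_config())
print(len(clone.weights))              # 0 -- not built yet
_ = clone(tf.ones((1, 4)))
print(len(clone.weights))              # 2 -- new, independently initialized weights
###Output
_____no_output_____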
###Markdown
Note that the `__init__()` method of the base `Layer` class takes some keyword arguments, in particular a `name` and a `dtype`. It's good practice to pass these arguments to the parent class in `__init__()` and to include them in the layer config:
###Code
class Linear(keras.layers.Layer):
def __init__(self, units=32, **kwargs):
super(Linear, self).__init__(**kwargs)
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
config = super(Linear, self).get_config()
config.update({"units": self.units})
return config
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
###Output
_____no_output_____
###Markdown
If you need more flexibility when deserializing the layer from its config, youcan also override the `from_config()` class method. This is the baseimplementation of `from_config()`:```pythondef from_config(cls, config): return cls(**config)```To learn more about serialization and saving, see the complete[guide to saving and serializing models](https://www.tensorflow.org/guide/keras/save_and_serialize/). Privileged `training` argument in the `call()` methodSome layers, in particular the `BatchNormalization` layer and the `Dropout`layer, have different behaviors during training and inference. For suchlayers, it is standard practice to expose a `training` (boolean) argument inthe `call()` method.By exposing this argument in `call()`, you enable the built-in training andevaluation loops (e.g. `fit()`) to correctly use the layer in training andinference.
###Code
class CustomDropout(keras.layers.Layer):
def __init__(self, rate, **kwargs):
super(CustomDropout, self).__init__(**kwargs)
self.rate = rate
def call(self, inputs, training=None):
if training:
return tf.nn.dropout(inputs, rate=self.rate)
return inputs
###Output
_____no_output_____
###Markdown
Privileged `mask` argument in the `call()` methodThe other privileged argument supported by `call()` is the `mask` argument.You will find it in all Keras RNN layers. A mask is a boolean tensor (oneboolean value per timestep in the input) used to skip certain input timestepswhen processing timeseries data.Keras will automatically pass the correct `mask` argument to `__call__()` forlayers that support it, when a mask is generated by a prior layer.Mask-generating layers are the `Embedding`layer configured with `mask_zero=True`, and the `Masking` layer.To learn more about masking and how to write masking-enabled layers, pleasecheck out the guide["understanding padding and masking"](https://www.tensorflow.org/guide/keras/masking_and_padding/). The `Model` classIn general, you will use the `Layer` class to define inner computation blocks,and will use the `Model` class to define the outer model -- the object youwill train.For instance, in a ResNet50 model, you would have several ResNet blockssubclassing `Layer`, and a single `Model` encompassing the entire ResNet50network.The `Model` class has the same API as `Layer`, with the following differences:- It exposes built-in training, evaluation, and prediction loops(`model.fit()`, `model.evaluate()`, `model.predict()`).- It exposes the list of its inner layers, via the `model.layers` property.- It exposes saving and serialization APIs (`save()`, `save_weights()`...)Effectively, the `Layer` class corresponds to what we refer to in theliterature as a "layer" (as in "convolution layer" or "recurrent layer") or asa "block" (as in "ResNet block" or "Inception block").Meanwhile, the `Model` class corresponds to what is referred to in theliterature as a "model" (as in "deep learning model") or as a "network" (as in"deep neural network").So if you're wondering, "should I use the `Layer` class or the `Model` class?",ask yourself: will I need to call `fit()` on it? Will I need to call `save()`on it? If so, go with `Model`. If not (either because your class is just a blockin a bigger system, or because you are writing training & saving code yourself),use `Layer`.For instance, we could take our mini-resnet example above, and use it to builda `Model` that we could train with `fit()`, and that we could save with`save_weights()`: ```pythonclass ResNet(tf.keras.Model): def __init__(self): super(ResNet, self).__init__() self.block_1 = ResNetBlock() self.block_2 = ResNetBlock() self.global_pool = layers.GlobalAveragePooling2D() self.classifier = Dense(num_classes) def call(self, inputs): x = self.block_1(inputs) x = self.block_2(x) x = self.global_pool(x) return self.classifier(x)resnet = ResNet()dataset = ...resnet.fit(dataset, epochs=10)resnet.save(filepath)``` Putting it all together: an end-to-end exampleHere's what you've learned so far:- A `Layer` encapsulate a state (created in `__init__()` or `build()`) and somecomputation (defined in `call()`).- Layers can be recursively nested to create new, bigger computation blocks.- Layers can create and track losses (typically regularization losses) as wellas metrics, via `add_loss()` and `add_metric()`- The outer container, the thing you want to train, is a `Model`. A `Model` isjust like a `Layer`, but with added training and serialization utilities.Let's put all of these things together into an end-to-end example: we're goingto implement a Variational AutoEncoder (VAE). We'll train it on MNIST digits.Our VAE will be a subclass of `Model`, built as a nested composition of layersthat subclass `Layer`. 
It will feature a regularization loss (KL divergence).
###Code
from tensorflow.keras import layers
class Sampling(layers.Layer):
"""Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""
def call(self, inputs):
z_mean, z_log_var = inputs
batch = tf.shape(z_mean)[0]
dim = tf.shape(z_mean)[1]
epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
return z_mean + tf.exp(0.5 * z_log_var) * epsilon
class Encoder(layers.Layer):
"""Maps MNIST digits to a triplet (z_mean, z_log_var, z)."""
def __init__(self, latent_dim=32, intermediate_dim=64, name="encoder", **kwargs):
super(Encoder, self).__init__(name=name, **kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation="relu")
self.dense_mean = layers.Dense(latent_dim)
self.dense_log_var = layers.Dense(latent_dim)
self.sampling = Sampling()
def call(self, inputs):
x = self.dense_proj(inputs)
z_mean = self.dense_mean(x)
z_log_var = self.dense_log_var(x)
z = self.sampling((z_mean, z_log_var))
return z_mean, z_log_var, z
class Decoder(layers.Layer):
"""Converts z, the encoded digit vector, back into a readable digit."""
def __init__(self, original_dim, intermediate_dim=64, name="decoder", **kwargs):
super(Decoder, self).__init__(name=name, **kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation="relu")
self.dense_output = layers.Dense(original_dim, activation="sigmoid")
def call(self, inputs):
x = self.dense_proj(inputs)
return self.dense_output(x)
class VariationalAutoEncoder(keras.Model):
"""Combines the encoder and decoder into an end-to-end model for training."""
def __init__(
self,
original_dim,
intermediate_dim=64,
latent_dim=32,
name="autoencoder",
**kwargs
):
super(VariationalAutoEncoder, self).__init__(name=name, **kwargs)
self.original_dim = original_dim
self.encoder = Encoder(latent_dim=latent_dim, intermediate_dim=intermediate_dim)
self.decoder = Decoder(original_dim, intermediate_dim=intermediate_dim)
def call(self, inputs):
z_mean, z_log_var, z = self.encoder(inputs)
reconstructed = self.decoder(z)
# Add KL divergence regularization loss.
kl_loss = -0.5 * tf.reduce_mean(
z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1
)
self.add_loss(kl_loss)
return reconstructed
###Output
_____no_output_____
###Markdown
Let's write a simple training loop on MNIST:
###Code
original_dim = 784
vae = VariationalAutoEncoder(original_dim, 64, 32)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
mse_loss_fn = tf.keras.losses.MeanSquaredError()
loss_metric = tf.keras.metrics.Mean()
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype("float32") / 255
train_dataset = tf.data.Dataset.from_tensor_slices(x_train)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
epochs = 2
# Iterate over epochs.
for epoch in range(epochs):
print("Start of epoch %d" % (epoch,))
# Iterate over the batches of the dataset.
for step, x_batch_train in enumerate(train_dataset):
with tf.GradientTape() as tape:
reconstructed = vae(x_batch_train)
# Compute reconstruction loss
loss = mse_loss_fn(x_batch_train, reconstructed)
loss += sum(vae.losses) # Add KLD regularization loss
grads = tape.gradient(loss, vae.trainable_weights)
optimizer.apply_gradients(zip(grads, vae.trainable_weights))
loss_metric(loss)
if step % 100 == 0:
print("step %d: mean loss = %.4f" % (step, loss_metric.result()))
###Output
_____no_output_____
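###Markdown
As a quick sanity check after training, you can push a few training images back through the model and look at the reconstruction error. This is only an illustrative check; what counts as a "good" value depends on the number of epochs and the loss scale.
###Code
batch = x_train[:16]
reconstructed_batch = vae(batch)
print("reconstruction MSE:", float(tf.reduce_mean(tf.square(reconstructed_batch - batch))))
###Output
_____no_output_____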
###Markdown
Note that since the VAE is subclassing `Model`, it features built-in training loops. So you could also have trained it like this:
###Code
vae = VariationalAutoEncoder(784, 64, 32)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=2, batch_size=64)
###Output
_____no_output_____
###Markdown
Beyond object-oriented development: the Functional API Was this example too much object-oriented development for you? You can also build models using the [Functional API](https://www.tensorflow.org/guide/keras/functional/). Importantly, choosing one style or another does not prevent you from leveraging components written in the other style: you can always mix-and-match. For instance, the Functional API example below reuses the same `Sampling` layer we defined in the example above:
###Code
original_dim = 784
intermediate_dim = 64
latent_dim = 32
# Define encoder model.
original_inputs = tf.keras.Input(shape=(original_dim,), name="encoder_input")
x = layers.Dense(intermediate_dim, activation="relu")(original_inputs)
z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)
z = Sampling()((z_mean, z_log_var))
encoder = tf.keras.Model(inputs=original_inputs, outputs=z, name="encoder")
# Define decoder model.
latent_inputs = tf.keras.Input(shape=(latent_dim,), name="z_sampling")
x = layers.Dense(intermediate_dim, activation="relu")(latent_inputs)
outputs = layers.Dense(original_dim, activation="sigmoid")(x)
decoder = tf.keras.Model(inputs=latent_inputs, outputs=outputs, name="decoder")
# Define VAE model.
outputs = decoder(z)
vae = tf.keras.Model(inputs=original_inputs, outputs=outputs, name="vae")
# Add KL divergence regularization loss.
kl_loss = -0.5 * tf.reduce_mean(z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)
vae.add_loss(kl_loss)
# Train.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=3, batch_size=64)
###Output
_____no_output_____ |
dsa/XVDPU-TRD/vck190_platform/petalinux/xilinx-vck190-base-trd/project-spec/meta-base-trd/recipes-apps/base-trd/base-trd/notebooks/base-trd-apm.ipynb | ###Markdown
![Xilinx Logo](images/xilinx_logo.png "Xilinx Logo") 1. Introduction This notebook demonstrates how to use the APM library for performance monitoring of read and write throughput in the MIPI, accelerator and HDMI pipelines. The APM library configures and reads out the AXI performance monitors (APM) that are added into the PL design. The following is a list of monitoring slots and configured metrics: * HDMI overlay planes 0 to 3 read throughput * HDMI overlay planes 4 to 7 read throughput * Accelerator write throughput * Accelerator read throughput * MIPI write throughput In this notebook, you will: 1. Create a list of desired APM metrics to be recorded 2. Plot the data in a real-time graph 2. Imports and Initialization Import all Python modules required for this notebook. The ``libxperfmon`` module provides the APM monitoring functionality.
###Code
from IPython.display import clear_output
import libxperfmon
from matplotlib import pyplot as plt
import numpy as np
import time
import os
%matplotlib inline
###Output
_____no_output_____
###Markdown
3. Create and Configure the APMs with their Metrics Define a helper function that determines the platform being used.
###Code
def get_pfm_name():
pfms = {
'vcap_csi' : 'preset_pfm1',
'vcap_gmsl' : 'preset_pfm2',
'vcap_hdmi' : 'preset_pfm3'
}
for p in pfms:
if os.path.exists("/sys/firmware/devicetree/base/amba_pl@0/" + p) == True:
return pfms[p]
###Output
_____no_output_____
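###Markdown
A quick check of the helper defined above: on a board running one of the supported platforms it returns the matching preset name, and it implicitly returns `None` if none of the device-tree nodes are present (for example when running off-target).
###Code
print(get_pfm_name())
###Output
_____no_output_____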
###Markdown
Create a list consisting of the APM metrics you want to measure and create the APM object. The APM configuration is platform specific and set via attribute name inside the ``libxperfmon`` library e.g. ``libxperfmon.preset_pfm1`` for platform1.
###Code
APM_LIST = libxperfmon.APM(getattr(libxperfmon,get_pfm_name()))
###Output
_____no_output_____
###Markdown
4. Read and Plot the Memory Throughput Values in Real-Time Create a function ``autolabel`` that prints the current values inside the bar graph.In an infinite loop, read the APM values using the APM object, configure various properties and plot the graph.
###Code
def autolabel(rects):
for rect in rects:
width = rect.get_width()
if width > 0:
ax.text(rect.get_x() + rect.get_width()/2, rect.get_y() + rect.get_height()/2.,
'%.2f' % width, ha='left', va='center', color='white',size='20')
while True:
clear_output(wait=True)
# read APM values and add them to a list
# the values for HDMI overlay planes 0 to 3 and 4 to 7 are added
hdmi_o_rd = round(APM_LIST.port[0].getThroughput(libxperfmon.Gbps)
+ APM_LIST.port[1].getThroughput(libxperfmon.Gbps),2)
pl_accel_rd = round(APM_LIST.port[2].getThroughput(libxperfmon.Gbps),2)
pl_accel_wr = round(APM_LIST.port[3].getThroughput(libxperfmon.Gbps),2)
aie_accel_rd = round(APM_LIST.port[4].getThroughput(libxperfmon.Gbps),2)
aie_accel_wr = round(APM_LIST.port[5].getThroughput(libxperfmon.Gbps),2)
mipi_wr = round(APM_LIST.port[6].getThroughput(libxperfmon.Gbps),2)
read = [
hdmi_o_rd,
pl_accel_rd,
pl_accel_wr,
aie_accel_rd,
aie_accel_wr,
mipi_wr
]
# create matching list of labels
labels = [
'HDMI Output Rd',
'PL Accel Rd',
'PL Accel Wr',
'AIE Accel Rd',
'AIE Accel Wr',
'MIPI / HDMI Wr'
]
fig, ax = plt.subplots()
fig.set_facecolor('#111111') # match color of jupyterlab theme
fig.set_size_inches(12, 6)
x = np.arange(0, 1.5, 0.25) # the label locations
width = 0.2 # the width of the bars
colors = ['g' for i in range(len(labels))]
rects1 = ax.barh(x, read, width, color=colors) # plot bars
autolabel(rects1) # print values inside bars
ax.set_title('Memory Throughput (Gbps)', color='white', size='30')
ax.set_facecolor('#111111') # match color of jupyterlab theme
ax.set_yticks(x)
ax.set_yticklabels(labels, color='white', size='20') # print labels
plt.tight_layout()
plt.xlim([0, 16])
plt.show()
###Output
_____no_output_____ |
Yahoo Finance/Build a stock market brief/S01E02-stock-moving-averages.ipynb | ###Markdown
Build a stock market brief - S01E02-stock-moving-averages Moving averages are one of the most common ways to follow stock trends and anticipate their variations. They are one of the techniques traders use to better judge when to buy or sell: if the 20MA curve is above the 50MA, you should buy; if the 50MA curve is above the 20MA, you should sell.
###Code
stock = "TSLA"
###Output
_____no_output_____
###Markdown
Import packages
###Code
import datetime as dt
import matplotlib.pyplot as plt
import pandas as pd
import pandas_datareader.data as web
###Output
_____no_output_____
###Markdown
Import data
###Code
date_from = dt.datetime(2016,1,1)
date_to = dt.datetime.today()
df=web.DataReader(stock,'yahoo',date_from,date_to)
df.head(2)
###Output
_____no_output_____
###Markdown
Calculate moving averages
###Code
df["20ma"]= df["Close"].rolling(window=20).mean()
df["50ma"]= df["Close"].rolling(window=50).mean()
###Output
_____no_output_____
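###Markdown
Derive a naive crossover signal from the two moving averages, encoding the buy/sell heuristic described at the top of the notebook (the column name "signal" is just a suggestion, and this is illustrative only, not trading advice).
###Code
# Rows before the 50-day window has filled up have NaN averages and are not meaningful here.
df["signal"] = (df["20ma"] > df["50ma"]).map({True: "buy", False: "sell"})
df[["Close", "20ma", "50ma", "signal"]].tail()
###Output
_____no_output_____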
###Markdown
Visualize the data analysis
###Code
fig,ax=plt.subplots(figsize=(16,9))
ax.plot(df.index,df['Close'],label=f'{stock}')
ax.plot(df.index,df['20ma'],label="Moving av. 20",color="green")
ax.plot(df.index,df['50ma'],label="Moving av. 50",color="red")
ax.set_xlabel("Date")
ax.set_ylabel("Closing value")
ax.legend()
###Output
_____no_output_____ |
deprecated/code/SOMOSPIE.ipynb | ###Markdown
SOMOSPIEMigrating code to a Jupyter Notebook: converted bash shell scripts to cells that call on the subscript files.https://docs.python.org/2/library/subprocess.htmlText from the paper.https://github.com/TauferLab/Src_SoilMoisture/tree/master/2018_BigData/docs/2018paper AbstractThe current availability of soil moisture data over large areas comes from satellite remote sensing technologies (i.e., radar-based systems), but these data have coarse resolution and often exhibit large spatial information gaps. Where data are too coarse or sparse for a given need (e.g., precision agriculture), one can leverage machine-learning techniques coupled with other sources of environmental information (e.g., topography) to generate gap-free information and at a finer spatial resolution (i.e., increased granularity). To this end, we develop a spatial inference engine consisting of modular stages for processing spatial environmental data, generating predictions with machine-learning techniques, and analyzing these predictions. We demonstrate the functionality of this approach and the effects of data processing choices via multiple prediction maps over a United States ecological region with a highly diverse soil moisture profile (i.e., the Middle Atlantic Coastal Plains). The relevance of our work derives from a pressing need to improve the spatial representation of soil moisture for applications in environmental sciences (e.g., ecological niche modeling, carbon monitoring systems, and other Earth system models) and precision agriculture (e.g., optimizing irrigation practices and other land management decisions). OverviewWe build a modular SOil MOisture SPatial Inference Engine (SOMOSPIE) for prediction of missing soil moisture information. SOMOSPIE includes three main stages, illustrated below: (1) data processing to select a region of interest, incorporate predictive factors such as topographic parameters, and reduce data redundancy for these new factors; (2) soil moisture prediction with three different machine learning methods (i.e., kNN, HYPPO, and RF); and (3) analysis and visualization of the prediction outputs.![inference-engine](../figs/inference-engine.png) User InputMake changes to the cell below, then in the "Cell" menu at the top, select "Run All".
###Code
# Here the user specifies the working directory...
START = "../"
# ... the subfolder with the modular scripts...
CODE = "code/"
# ... the subfolder with the data...
DATA = "data/"
# ... the subfolder for output.
OUTPUT = "out/"
YEAR = 2016
# Assuming SM_FILE below has multiple months of SM data,
# specify the month here (1=January, ..., 12=December)
# The generated predictions will go in a subfolder of the data folder named by this number.
# Set to 0 if train file is already just 3-columns (lat, lon, sm).
MONTH = 4
#############################
# Within the data folder...
# ... there should be a subfolder with/for training data...
TRAIN_DIR = f"{YEAR}/t/"#-100000"
# ... and a subfolder with/for evaluation data.
EVAL_DIR = f"{YEAR}/e/"
# THE FOLLOWING 3 THINGS WILL ONLY BE USED IF MAKE_T_E = 1.
# Specify the location of the file with sm data.
# Use an empty string or False if the train folder is already populated.
SM_FILE = f"{YEAR}/{YEAR}_ESA_monthly.rds"
# Specify location of eval coordinates needing covariates attached.
# An empty string or False will indicate that the eval folder is already populated.
EVAL_FILE = f""#{YEAR}/{MONTH}/ground_sm_means_CONUS.csv"
# Specify location of the file with covariate data.
# An empty string or False will indicate that covariates are already attached to train and eval files.
COV_FILE = "USA_topo.tif"#8.5_topo.tif"#6.2_topo.tif"#
##########################
# If the Train and Eval files need to be generated, set MAKE_T_E = 1.
MAKE_T_E = 0
# If you wish to perform PCA, set USE_PCA = 1; otherwise USE_PCA = 0.
USE_PCA = 0
# Compute residuals from the original test data? Set VALIDATE to 1.
# Split off a fraction of the original for validation (e.g. 25%)? Set VALIDATE to 1.25.
# Use the EVAL_FILE as ground truth for validation? Set VALIDATE to 2.
VALIDATE = 1.25
RAND_SEED = 0 #0 for new, random seed, to be found in log file
# Create images?
USE_VIS = 1
# Specify the ecoregions to cut out of the sm data.
#REG_LIST = ["6.2.10", "6.2.12", "6.2.13", "6.2.14"]
#REG_LIST = [f"6.2"]#.{l3}" for l3 in range(3, 16) if l3!=6]
REG_LIST = ["8.5.1"]#, "8.5.2"]#, "8.5.3"]#"8.5",
# Specify the number of km of a buffer you want on the training data.
BUFFER = 0#100000
# Dictionary with a models as keys and model-specific parameter:arglist dictionaries as values.
MODICT = {
# "1NN":{"-p":[1]},
# "KKNN":{"-k":[10]},
"RF":{},
# "HYPPO":{"-p":[1], "-k":[10], "-D":[3], "-v":[2]},
# "UNMODEL":{}
}
###Output
_____no_output_____
###Markdown
Libraries and utility functionsMisc. Python functions to assist all the processes below.
###Code
# Required packages
# R: raster, caret, quantregForest, rgdalless, kknn, rasterVis
# Python2: pandas, sklearn, argparse, sys, numpy, itertools, random,
# scipy, matplotlib, re, ipykernel
# Python3: argparse, re, itertools, random, scipy, ipykernel
import pathlib, proc
from subprocess import Popen
# https://docs.python.org/2/library/os.html#files-and-directories
from os import listdir, chdir
from __utils import *
# The following are in __utils
#def bash(*argv):
# call([str(arg) for arg in argv])
#
#def append_to_folder(folder_path, suffix):
# if type(folder_path)==str:
# return folder_path.rstrip("/") + str(suffix)
# else:
# folder = folder_path.name + str(suffix)
# return folder_path.parent.joinpath(folder)
###Output
_____no_output_____
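###Markdown
A brief usage sketch (added here for illustration, assuming `append_to_folder` behaves as documented in the commented listing above): it appends a suffix to a folder given either as a string or as a `pathlib.Path`.
###Code
# Added sketch: expected behaviour of append_to_folder from __utils,
# based on the commented reference implementation above.
print(append_to_folder("data/2016/t/", "-postproc"))              # 'data/2016/t-postproc'
print(append_to_folder(pathlib.Path("data/2016/t"), "-postproc")) # PosixPath('data/2016/t-postproc')
###Output
_____no_output_____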
###Markdown
Stage 1: Curating Data
###Code
from __A_curate import curate
###Output
_____no_output_____
###Markdown
Stage 2: Generating a model; making predictions
###Code
from __B_model import model
###Output
_____no_output_____
###Markdown
Stage 3: Analysis and Visualization
###Code
from __C_analyze import analysis
from __D_visualize import visualize
###Output
_____no_output_____
###Markdown
Wrapper Script
###Code
########################################
# Wrapper script for most of the workflow
#
# Arguments:
#   START  directory of the folder containing both the train and eval (prediction) folders;
#          the train folder contains regional files,
#          and the eval folder must contain regional files
#          with the same names as those in the train folder
START = pathlib.Path(START).resolve()
print(f"Starting folder: {START}\n")
# Set the working directory to the code subfolder, for running the scripts
chdir(pathlib.Path(START, CODE))
# Change data files and folders to full paths
DATA = START.joinpath(DATA)
if MAKE_T_E:
if SM_FILE:
SM_FILE = DATA.joinpath(SM_FILE)
if not SM_FILE.exists():
print(f"ERROR! Specified SM_FILE does not exist: {SM_FILE}")
if COV_FILE:
COV_FILE = DATA.joinpath(COV_FILE)
if not COV_FILE.exists():
print(f"ERROR! Specified COV_FILE does not exist: {COV_FILE}")
if EVAL_FILE:
EVAL_FILE = DATA.joinpath(EVAL_FILE)
if not EVAL_FILE.exists():
print(f"ERROR! Specified EVAL_FILE does not exist: {EVAL_FILE}")
else:
SM_FILE = ""
COV_FILE = ""
EVAL_FILE = ""
TRAIN_DIR = DATA.joinpath(TRAIN_DIR)
EVAL_DIR = DATA.joinpath(EVAL_DIR)
OUTPUT = START.joinpath(OUTPUT).joinpath(str(YEAR))
print(f"Original training data in: {TRAIN_DIR}")
print(f"Original evaluation data in: {EVAL_DIR}")
# ... so we can suffix them at will
MNTH_SUFX = f"-{MONTH}"
##########################################
# 1 Data Processing
# ORIG is the sm data before any filtering, for use with analysis()
# TRAIN is the training set after filtering and pca, if specified
# EVAL is the evaluation set after filtering and pca, if specified
curate_input = [OUTPUT, SM_FILE, COV_FILE, EVAL_FILE, REG_LIST, BUFFER,
TRAIN_DIR, MONTH, EVAL_DIR, USE_PCA, VALIDATE, RAND_SEED]
print(f"curate(*{curate_input})")
ORIG, TRAIN, EVAL = curate(*curate_input)
print(f"Curated training data in: {TRAIN}")
print(f"Curated evaluation data in: {EVAL}")
if len(listdir(TRAIN)) != len(listdir(EVAL)):
print(listdir(TRAIN))
print(listdir(EVAL))
raise Exception("We've got a problem! TRAIN and EVAL should have the same contents.")
##########################################
# 2 Modeling
PRED = OUTPUT.joinpath(str(MONTH))
NOTE = ""
if BUFFER:
NOTE += f"-{BUFFER}"
if USE_PCA:
NOTE += "-PCA"
model_input = [0, TRAIN, EVAL, PRED, MODICT, NOTE]
print(f"model(*{model_input})")
model(*model_input)
##########################################
# 3 Analysis & Visualization
for region in REG_LIST:
#LOGS = os.path.join(PRED, region, SUB_LOGS,"")
if VALIDATE:
analysis_input = [region, PRED, ORIG, VALIDATE]
print(f"analysis(*{analysis_input})")
analysis(*analysis_input)
if USE_VIS:
# Specify the input data folder and the output figures folder
DATS = PRED.joinpath(region)
OUTS = DATS.joinpath(SUB_FIGS)
visualize_input = [DATS, OUTS, 1, VALIDATE, 1, 0]
print(f"visualize(*{visualize_input})")
visualize(*visualize_input)
###Output
Starting folder: /home/dror/Src_SoilMoisture/SOMOSPIE
Original training data in: /home/dror/Src_SoilMoisture/SOMOSPIE/data/2016/t
Original evaluation data in: /home/dror/Src_SoilMoisture/SOMOSPIE/data/2016/e
curate(*[PosixPath('/home/dror/Src_SoilMoisture/SOMOSPIE/out/2016'), '', '', '', ['8.5.1'], 0, PosixPath('/home/dror/Src_SoilMoisture/SOMOSPIE/data/2016/t'), 4, PosixPath('/home/dror/Src_SoilMoisture/SOMOSPIE/data/2016/e'), 0, 1.25, 0])
Curation log file: /home/dror/Src_SoilMoisture/SOMOSPIE/out/2016/proc-log4.txt
Curated training data in: /home/dror/Src_SoilMoisture/SOMOSPIE/data/2016/t-postproc
Curated evaluation data in: /home/dror/Src_SoilMoisture/SOMOSPIE/data/2016/e-postproc
model(*[0, PosixPath('/home/dror/Src_SoilMoisture/SOMOSPIE/data/2016/t-postproc'), PosixPath('/home/dror/Src_SoilMoisture/SOMOSPIE/data/2016/e-postproc'), PosixPath('/home/dror/Src_SoilMoisture/SOMOSPIE/out/2016/4'), {'RF': {}}, ''])
analysis(*['8.5.1', PosixPath('/home/dror/Src_SoilMoisture/SOMOSPIE/out/2016/4'), PosixPath('/home/dror/Src_SoilMoisture/SOMOSPIE/data/2016/original_sm'), 1.25])
visualize(*[PosixPath('/home/dror/Src_SoilMoisture/SOMOSPIE/out/2016/4/8.5.1'), PosixPath('/home/dror/Src_SoilMoisture/SOMOSPIE/out/2016/4/8.5.1/figures'), 1, 1.25, 1, 0])
Opening log: /home/dror/Src_SoilMoisture/SOMOSPIE/out/2016/4/8.5.1/logs/RF.txt
Saving image to /home/dror/Src_SoilMoisture/SOMOSPIE/out/2016/4/8.5.1/figures/predictions/RF-plot.png
Saving image to /home/dror/Src_SoilMoisture/SOMOSPIE/out/2016/4/8.5.1/figures/residuals/RF-plot.png
|
tutorials/textbook/01_IQPE.ipynb | ###Markdown
Iterative Quantum Phase Estimation AlgorithmThe goal of this tutorial is to understand how the Iterative Phase Estimation (IPE) algorithm works, why we would use the IPE algorithm instead of the QPE (Quantum Phase Estimation) algorithm, and how to build it with Qiskit using a single circuit, exploiting the reset gate and the `c_if` method, which allows us to apply gates conditioned by the values stored in a classical register, resulting from previous measurements.**References**- [Section 2 of Lab 4: Iterative Phase Estimation (IPE) Algorithm](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html#2-iterative-phase-estimation-ipe-algorithm) - [Ch.3.6 Quantum Phase Estimation](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html)
###Code
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute, assemble, Aer
from qiskit.tools.visualization import plot_histogram
from math import pi
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Conditioned gates: the c_if method Before starting the IPE algorithm, we will give a brief tutorial about the Qiskit conditional method, c_if, as it goes into building the IPE circuit.`c_if` is a function (actually a method of the gate class) to perform conditioned operations based on the value stored previously in a classical register. With this feature you can apply gates after a measurement in the same circuit conditioned by the measurement outcome.For example, the following code will execute the $X$ gate if the value of the classical register is $0$.
###Code
q = QuantumRegister(1,'q')
c = ClassicalRegister(1,'c')
qc = QuantumCircuit(q, c)
qc.h(0)
qc.measure(0,0)
qc.x(0).c_if(c, 0)
qc.draw(output='mpl')
###Output
_____no_output_____
###Markdown
We highlight that the method c_if expects as the first argument a whole classical register, not a single classical bit (or a list of classical bits), and as the second argument a value in decimal representation (a non-negative integer), not the value of a single bit, 0, or 1 (or a list/string of binary digits).Let's make another example. Consider that we want to perform a bit flip on the third qubit after the measurements in the following circuit, when the results of the measurement of $q_0$ and $q_1$ are both $1$.
###Code
q = QuantumRegister(3,'q')
c = ClassicalRegister(3,'c')
qc = QuantumCircuit(q, c)
qc.h(q[0])
qc.h(q[1])
qc.h(q[2])
qc.barrier()
qc.measure(q,c)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
We want to apply the $X$ gate only if both the results of the measurement of $q_0$ and $q_1$ are $1$. We can do this using the c_if method, conditioning the application of $X$ depending on the value passed as argument to c_if.We will have to encode the value to pass to the c_if method such that it will check the values 011 and 111 (in binary representation), since it does not matter what is in the leftmost position.The 2 integer values in decimal representation are therefore 3 and 7. We can check the solutions using the bin() method in python (the prefix `0b` indicates the binary format).
###Code
print(bin(3))
print(bin(7))
###Output
0b11
0b111
###Markdown
So we have to apply $X$ to $q_2$ using c_if two times, one for each value corresponding to 011 and 111.
###Code
q = QuantumRegister(3,'q')
c = ClassicalRegister(3,'c')
qc = QuantumCircuit(q, c)
qc.h(0)
qc.h(1)
qc.h(2)
qc.barrier()
qc.measure(q,c)
qc.x(2).c_if(c, 3) # for the 011 case
qc.x(2).c_if(c, 7) # for the 111 case
qc.draw(output='mpl')
###Output
_____no_output_____
###Markdown
IPEThe motivation for using the IPE algorithm is that the QPE algorithm works fine for short-depth circuits, but when the circuit starts to grow it doesn't work properly due to gate noise and decoherence times.The detailed explanation of how the algorithm works can be found in [Iterative Phase Estimation (IPE) Algorithm](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html#2-iterative-phase-estimation-ipe-algorithm). To understand QPE in depth, you can also see [Ch.3.6 Quantum Phase Estimation](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html). IPE example with a 1-qubit gate for $U$We want to apply the IPE algorithm to estimate the phase for a 1-qubit operator $U$. For example, here we apply it to the $S$-gate.Its matrix is $$ S = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{2}\\ \end{bmatrix}$$That is, the $S$-gate adds a phase $\pi/2$ to the state $|1\rangle$, leaving unchanged the phase of the state $|0\rangle$$$ S|1\rangle = e^\frac{i\pi}{2}|1\rangle $$In the following, we will use the notation and terms used in [Section 2 of lab 4](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html#2-iterative-phase-estimation-ipe-algorithm).If we estimate the phase $\phi=\frac{\pi}{2}$ for the eigenstate $|1\rangle$, we should find $\varphi=\frac{1}{4}$ (where $\phi = 2 \pi \varphi$). Therefore to estimate the phase we need exactly 2 phase bits, i.e. $m=2$, since $1/2^2=1/4$. So $\varphi=0.\varphi_1\varphi_2$.Remember from the theory that for the IPE algorithm, $m$ is also the number of iterations, so we need only $2$ iterations or steps.First, we initialize the circuit. IPE works with only 1 auxiliary qubit, instead of the $m$ counting qubits of the QPE algorithm. Therefore, we need 2 qubits, 1 auxiliary qubit and 1 for the eigenstate of the $U$-gate, and a classical register of 2 bits, for the phase bits $\varphi_1$, $\varphi_2$.
###Code
nq = 2
m = 2
q = QuantumRegister(nq,'q')
c = ClassicalRegister(m,'c')
qc_S = QuantumCircuit(q,c)
###Output
_____no_output_____
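###Markdown
As a quick added check (not part of the original tutorial), the binary-fraction arithmetic above can be verified directly: with $m=2$ phase bits, $\varphi=0.\varphi_1\varphi_2=\varphi_1/2+\varphi_2/4$, so the bit string `01` corresponds to $\varphi=0.25$, i.e. $\phi=2\pi\varphi=\pi/2$.
###Code
# Added check: convert a string of phase bits (varphi_1 varphi_2) to the decimal phase.
bits = "01"                 # varphi_1 = 0, varphi_2 = 1
print(int(bits, 2) / 2**m)  # expected: 0.25, matching phi = pi/2 for the S-gate
###Output
_____no_output_____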
###Markdown
First stepNow we build the quantum circuit for the first step, that is, the first iteration of the algorithm, to estimate the least significant phase bit $\varphi_m$, in this case $\varphi_2$. For the first step we have 3 sub-steps:- initialization- application of the Controlled-$U$ gates- measure of the auxiliary qubit in x-basis InitializationThe initialization consists of applying the Hadamard gate to the auxiliary qubit and preparing the eigenstate $|1\rangle$.
###Code
qc_S.h(0)
qc_S.x(1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gatesThen we have to apply $2^t$ times the Controlled-$U$ operators (see also in the docs [Two qubit gates](https://qiskit.org/documentation/tutorials/circuits/3_summary_of_quantum_operations.html#Two-qubit-gates)), which, in this example, is the Controlled-$S$ gate ($CS$ for short).To implement $CS$ in the circuit, since $S$ is a phase gate, we can use the controlled phase gate $\text{CP}(\theta)$, with $\theta=\pi/2$.
###Code
cu_circ = QuantumCircuit(2)
cu_circ.cp(pi/2,0,1)
cu_circ.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's apply $2^t$ times $\text{CP}(\pi/2)$. Since for the first step $t=m-1$, and $m=2$, we have $2^t=2$.
###Code
for _ in range(2**(m-1)):
qc_S.cp(pi/2,0,1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Measure in x-basisFinally, we perform the measurement of the auxiliary qubit in x-basis. So we will define a function to perform the x-measurement and then apply it.
###Code
def x_measurement(qc, qubit, cbit):
"""Measure 'qubit' in the X-basis, and store the result in 'cbit'"""
qc.h(qubit)
qc.measure(qubit, cbit)
###Output
_____no_output_____
###Markdown
In this way we obtain the phase bit $\varphi_2$ and store it in the classical bit $c_0$.
###Code
x_measurement(qc_S, q[0], c[0])
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Subsequent steps (2nd step)Now we build the quantum circuit for the other remaining steps, in this example, only the second one.In these steps we have 4 sub-steps: the 3 sub-steps as in the first step and, in the middle, the additional step of the phase correction- initialization with reset- phase correction- application of the Controlled-$U$ gates- measure of the auxiliary qubit in x-basis Initialization with resetAs we want to perform an iterative algorithm in the same circuit, we need to reset the auxiliary qubit $q_0$ after the measurement gate and initialize it again as before to recycle the qubit.
###Code
qc_S.reset(0)
qc_S.h(0)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Phase correction (for step 2)As seen in the theory, in order to extract the phase bit $\varphi_{1}$, we perform a phase correction of $-\pi\varphi_2/2$.Of course, we need to apply the phase correction in the circuit only if the phase bit $\varphi_2=1$, i.e. we have to apply the phase correction of $-\pi/2$ only if the classical bit $c_0$ is 1.So, after the reset we apply the phase gate $P(\theta)$ with phase $\theta=-\pi/2$ conditioned by the classical bit $c_0$ ($=\varphi_2$) using the `c_if` method.So as we saw in the first part of this tutorial, we have to use the `c_if` method with a value of 1, as $1_{10} = 001_{2}$ (the subscripts $_{10}$ and $_2$ indicate the decimal and binary representations).
###Code
qc_S.p(-pi/2,0).c_if(c,1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gates and x-measurement (for step 2)We apply the $CU$ operations as we did in the first step. For the second step we have $t=m-2$, hence $2^t=1$. So we apply $\text{CP}(\pi/2)$ once. And then we perform the x-measurement of the qubit $q_0$, storing the result, the phase bit $\varphi_1$, in the bit $c_1$ of classical register.
###Code
## 2^t c-U operations (with t=m-2)
for _ in range(2**(m-2)):
qc_S.cp(pi/2,0,1)
x_measurement(qc_S, q[0], c[1])
###Output
_____no_output_____
###Markdown
Et voilà, we have our final circuit
###Code
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's execute the circuit with the `qasm_simulator`, the simulator without noise that run locally.
###Code
sim = Aer.get_backend('qasm_simulator')
count0 = execute(qc_S, sim).result().get_counts()
key_new = [str(int(key,2)/2**m) for key in list(count0.keys())]
count1 = dict(zip(key_new, count0.values()))
fig, ax = plt.subplots(1,2)
plot_histogram(count0, ax=ax[0])
plot_histogram(count1, ax=ax[1])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
In the picture we have the same histograms but on the left we have on the x-axis the string with phase bits $\varphi_1$, $\varphi_2$ and on the right the actual phase $\varphi$ in decimal representation.As we expected we have found $\varphi=\frac{1}{4}=0.25$ with a $100\%$ probability. IPE example with a 2-qubit gateNow, we want to apply the IPE algorithm to estimate the phase for a 2-qubit gate $U$. For this example, let's consider the controlled version of the $T$ gate, i.e. the gate $U=\textrm{Controlled-}T$ (which from now on we will write more compactly as $CT$). Its matrix is$$ CT = \begin{bmatrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & e^\frac{i\pi}{4}\\ \end{bmatrix} $$That is, the $CT$ gate adds a phase $\pi/4$ to the state $|11\rangle$, leaving unchanged the phase of the other computational basis states $|00\rangle$, $|01\rangle$, $|10\rangle$.If we estimate the phase $\phi=\pi/4$ for the eigenstate $|11\rangle$, we should find $\varphi=1/8$, since $\phi = 2 \pi \varphi$. Therefore to estimate the phase we need exactly 3 classical bits, i.e. $m=3$, since $1/2^3=1/8$. So $\varphi=0.\varphi_1\varphi_2\varphi_3$.As done with the example for the 1-qubit $U$ operator we will go through the same steps but this time we will have $3$ steps since $m=3$, and we will not repeat all the explanations. So for details see the above example for the 1-qubit $U$ gate.
###Code
nq = 3 # number of qubits
m = 3 # number of classical bits
q = QuantumRegister(nq,'q')
c = ClassicalRegister(m,'c')
qc = QuantumCircuit(q,c)
###Output
_____no_output_____
###Markdown
First stepNow we build the quantum circuit for the first step, to estimate the least significant phase bit $\varphi_m=\varphi_3$. InitializationWe initialize the auxiliary qubit and the other qubits with the eigenstate $|11\rangle$.
###Code
qc.h(0)
qc.x([1,2])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gatesThen we have to apply multiple times the $CU$ operator, that, in this example, is the Controlled-$CT$ gate ($CCT$ for short).To implement $CCT$ in the circuit, since $T$ is a phase gate, we can use the multi-controlled phase gate $\text{MCP}(\theta)$, with $\theta=\pi/4$.
###Code
cu_circ = QuantumCircuit(nq)
cu_circ.mcp(pi/4,[0,1],2)
cu_circ.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's apply $2^t$ times $\text{MCP}(\pi/4)$. Since for the first step $t=m-1$ and $m=3$, we have $2^t=4$.
###Code
for _ in range(2**(m-1)):
qc.mcp(pi/4,[0,1],2)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Measure in x-basisFinally, we perform the measurement of the auxiliary qubit in x-basis.We can use the `x_measurement` function defined above in the example for 1-qubit gate. In this way we have obtained the phase bit $\varphi_3$ and stored it in the classical bit $c_0$.
###Code
x_measurement(qc, q[0], c[0])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Subsequent steps (2nd, 3rd)Now we build the quantum circuit for the other remaining steps, the second and the third ones.As said in the first example, in these steps we have the additional sub-step of the phase correction. Initialization with reset
###Code
qc.reset(0)
qc.h(0)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Phase correction (for step 2)In order to extract the phase bit $\varphi_{2}$, we perform a phase correction of $-\pi\varphi_3/2$.So, after the reset we apply the phase gate $P(\theta)$ with phase $\theta=-\pi/2$ conditioned by the classical bit $c_0$ ($=\varphi_3$).
###Code
qc.p(-pi/2,0).c_if(c,1)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gates and x-measurement (for step 2)We apply the $CU$ operations as we did in the first step. For the second step we have $t=m-2$, hence $2^t=2$. So we apply $\text{MCP}(\pi/4)$ $2$ times. And then we perform the x-measurement of the qubit $q_0$, storing the phase bit $\varphi_2$ in the bit $c_1$.
###Code
for _ in range(2**(m-2)):
qc.mcp(pi/4,[0,1],2)
x_measurement(qc, q[0], c[1])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
All substeps of the 3rd stepFor the 3rd and last step, we perform the reset and initialization of the auxiliary qubit as done in the second step.Then at the 3rd step we have to perform the phase correction of $-2\pi 0.0\varphi_{2}\varphi_{3}= -2\pi \left(\frac{\varphi_2}{4}+\frac{\varphi_3}{8}\right)=-\frac{\varphi_2\pi}{2}-\frac{ \varphi_3\pi}{4}$, thus we have to apply 2 conditioned phase corrections, one conditioned by $\varphi_3$ ($=c_0$) and the other by $\varphi_2$($=c_1$). To do this we have to apply the following:- gate $P(-\pi/4)$ conditioned by $c_0=1$, that is, by $c=001$ (c_if with value $1$)- gate $P(-\pi/2)$ conditioned by $c_1=1$, that is, the gate is applied when $c=010$ (c_if with values $2$)- gate $P(-3\pi/4)$ conditioned by $c_1=1$ and $c_0=1$ that is, the gate is applied when $c=011$ (c_if with values $3$)Next, the $CU$ operations: we apply $2^t$ times the $\text{MCP}(\pi/4)$ gate and since at the 3rd step $t=m-3=0$, we apply the gate only once.
###Code
# initialization of qubit q0
qc.reset(0)
qc.h(0)
# phase correction
qc.p(-pi/4,0).c_if(c,1)
qc.p(-pi/2,0).c_if(c,2)
qc.p(-3*pi/4,0).c_if(c,3)
# c-U operations
for _ in range(2**(m-3)):
qc.mcp(pi/4,[0,1],2)
# X measurement
qc.h(0)
qc.measure(0,2)
qc.draw('mpl')
###Output
_____no_output_____
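###Markdown
As an added sanity check (not part of the original tutorial), the three conditioned phase corrections listed above follow from $-2\pi\left(\frac{\varphi_2}{4}+\frac{\varphi_3}{8}\right)$: evaluating it for every combination of $\varphi_2,\varphi_3$ reproduces the angles $-\pi/4$, $-\pi/2$ and $-3\pi/4$ used with c_if values 1, 2 and 3.
###Code
# Added check: enumerate the phase corrections for each value of the classical bits.
for varphi_2 in (0, 1):
    for varphi_3 in (0, 1):
        c_value = 2*varphi_2 + varphi_3          # register value with c1 = varphi_2, c0 = varphi_3
        theta = -2*pi*(varphi_2/4 + varphi_3/8)  # phase correction for the 3rd step
        print(c_value, round(theta/pi, 2), "* pi")  # expect 0, -0.25, -0.5, -0.75 * pi
###Output
_____no_output_____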
###Markdown
Now, we execute the circuit with the simulator without noise.
###Code
count0 = execute(qc, sim).result().get_counts()
key_new = [str(int(key,2)/2**m) for key in list(count0.keys())]
count1 = dict(zip(key_new, count0.values()))
fig, ax = plt.subplots(1,2)
plot_histogram(count0, ax=ax[0])
plot_histogram(count1, ax=ax[1])
fig.tight_layout()
###Output
_____no_output_____
###Markdown
We have obtained $100\%$ probability to find $\varphi=0.125$, that is, $1/8$, as expected.
###Code
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
/opt/miniconda3/envs/qiskit/lib/python3.9/site-packages/qiskit/aqua/operators/operator_globals.py:48: DeprecationWarning: `from_label` is deprecated and will be removed no earlier than 3 months after the release date. Use Pauli(label) instead.
X = make_immutable(PrimitiveOp(Pauli.from_label('X')))
###Markdown
Iterative Quantum Phase Estimation AlgorithmThe goal of this tutorial is to understand how the Iterative Phase Estimation (IPE) algorithm works, why would we use the IPE algorithm instead of the QPE (Quantum Phase Estimation) algorithm and how to build it with Qiskit using the same circuit exploiting reset gate and the `c_if` method that allows to apply gates conditioned by the values stored in a classical register, resulting from previous measurements.**References**- [Section 2 of Lab 4: Iterative Phase Estimation (IPE) Algorithm](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html2-iterative-phase-estimation-ipe-algorithm) - [Ch.3.6 Quantum Phase Estimation](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html)
###Code
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute, assemble, Aer
from qiskit.tools.visualization import plot_histogram
from math import pi
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Conditined gates: the c_if method Before starting the IPE algorithm, we will give a brief tutorial about the Qiskit conditional method, c_if, as it goes into building the IPE circuit.`c_if` is a function (actually a method of the gate class) to perform conditioned operations based on the value stored previously in a classical register. With this feature you can apply gates after a measurement in the same circuit conditioned by the measurement outcome.For example, the following code will execute the $X$ gate if the value of the classical register is $0$.
###Code
q = QuantumRegister(1,'q')
c = ClassicalRegister(1,'c')
qc = QuantumCircuit(q, c)
qc.h(0)
qc.measure(0,0)
qc.x(0).c_if(c, 0)
qc.draw(output='mpl')
###Output
_____no_output_____
###Markdown
We highlight that the method c_if expects as the first argument a whole classical register, not a single classical bit (or a list of classical bits), and as the second argument a value in decimal representation (a non-negative integer), not the value of a single bit, 0, or 1 (or a list/string of binary digits).Let's make another example. Consider that we want to perform a bit flip on the third qubit after the measurements in the following circuit, when the results of the measurement of $q_0$ and $q_1$ are both $1$.
###Code
q = QuantumRegister(3,'q')
c = ClassicalRegister(3,'c')
qc = QuantumCircuit(q, c)
qc.h(q[0])
qc.h(q[1])
qc.h(q[2])
qc.barrier()
qc.measure(q,c)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
We want to apply the $X$ gate, only if both the results of the measurement of $q_1$ and $q_2$ are $1$. We can do this using the c_if method, conditioning the application of $X$ depending on the value passed as argument to c_if.We will have to encode the value to pass to the c_if method such that it will check the values 011 and 111 (in binary representation), since it does not matter what is in the rightmost position.The 2 integer values in decimal representation: We can check the solutions using the bin() method in python (the prefix `0b` indicates the binary format).
###Code
print(bin(3))
print(bin(7))
###Output
0b11
0b111
###Markdown
So we have to apply $X$ to $q_2$ using c_if two times, one for each value corresponding to 011 and 111.
###Code
q = QuantumRegister(3,'q')
c = ClassicalRegister(3,'c')
qc = QuantumCircuit(q, c)
qc.h(0)
qc.h(1)
qc.h(2)
qc.barrier()
qc.measure(q,c)
qc.x(2).c_if(c, 3) # for the 011 case
qc.x(2).c_if(c, 7) # for the 111 case
qc.draw(output='mpl')
###Output
_____no_output_____
###Markdown
IPEThe motivation for using the IPE algorithm is that QPE algorithm works fine for short depth circuits but when the circuit starts to grow, it doesn't work properly due to gate noise and decoherence times.The detailed explanation of how the algorithm works can be found in [Iterative Phase Estimation (IPE) Algorithm](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html2-iterative-phase-estimation-ipe-algorithm). To understand QPE in depth, you can see also [Ch.3.6 Quantum Phase Estimation](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html). IPE example with a 1-qubit gate for $U$We want to apply the IPE algorithm to estimate the phase for a 1-qubit operator $U$. For example, here we use the $S$-gate.Let's apply the IPE algorithm to estimate the phase for $S$-gate.Its matrix is $$ S = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{2}\\ \end{bmatrix}$$That is, the $S$-gate adds a phase $\pi/2$ to the state $|1\rangle$, leaving unchanged the phase of the state $|0\rangle$$$ S|1\rangle = e^\frac{i\pi}{2}|1\rangle $$In the following, we will use the notation and terms used in [Section 2 of lab 4](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html2-iterative-phase-estimation-ipe-algorithm).Let's consider to estimate the phase $\phi=\frac{\pi}{2}$ for the eigenstate $|1\rangle$, we should find $\varphi=\frac{1}{4}$ (where $\phi = 2 \pi \varphi$). Therefore to estimate the phase we need exactly 2 phase bits, i.e. $m=2$, since $1/2^2=1/4$. So $\varphi=0.\varphi_1\varphi_2$.Remember from the theory that for the IPE algorithm, $m$ is also the number of iterations, so we need only $2$ iterations or steps.First, we initialize the circuit. IPE works with only 1 auxiliary qubit, instead of $m$ counting qubits of the QPE algorithm. Therefore, we need 2 qubits, 1 auxiliary qubit and 1 for the eigenstate of $U$-gate, and a classical register of 2 bits, for the phase bits $\varphi_1$, $\varphi_2$.
###Code
nq = 2
m = 2
q = QuantumRegister(nq,'q')
c = ClassicalRegister(m,'c')
qc_S = QuantumCircuit(q,c)
###Output
_____no_output_____
###Markdown
First stepNow we build the quantum circuit for the first step, that is, the first iteration of the algorithm, to estimate the least significant phase bit $\varphi_m$, in this case $\varphi_2$. For the first step we have 3 sub-steps:- initialization- application of the Controlled-$U$ gates- measure of the auxiliary qubit in x-basis InitializationThe initialization consists of application the Hadamard gate to the auxiliary qubit and the preparation of the eigenstate $|1\rangle$.
###Code
qc_S.h(0)
qc_S.x(1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gatesThen we have to apply $2^t$ times the Controlled-$U$ operators (see also in the docs [Two qubit gates](https://qiskit.org/documentation/tutorials/circuits/3_summary_of_quantum_operations.htmlTwo-qubit-gates)), that, in this example, is the Controlled-$S$ gate ($CS$ for short).To implement $CS$ in the circuit, since $S$ is a phase gate, we can use the controlled phase gate $\text{CP}(\theta)$, with $\theta=\pi/2$.
###Code
cu_circ = QuantumCircuit(2)
cu_circ.cp(pi/2,0,1)
cu_circ.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's apply $2^t$ times $\text{CP}(\pi/2)$. Since for the first step $t=m-1$, and $m=2$, we have $2^t=2$.
###Code
for _ in range(2**(m-1)):
qc_S.cp(pi/2,0,1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Measure in x-basisFinally, we perform the measurenment of the auxiliary qubit in x-basis. So we will define a function to perform the x_measure and then apply it.
###Code
def x_measurement(qc, qubit, cbit):
"""Measure 'qubit' in the X-basis, and store the result in 'cbit'"""
qc.h(qubit)
qc.measure(qubit, cbit)
###Output
_____no_output_____
###Markdown
In this way we obtain the phase bit $\varphi_2$ and store it in the classical bit $c_0$.
###Code
x_measurement(qc_S, q[0], c[0])
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Subsequent steps (2nd step)Now we build the quantum circuit for the other remaining steps, in this example, only the second one.In these steps we have 4 sub-steps: the 3 sub-steps as in the first step and, in the middle, the additional step of the phase correction- initialization with reset- phase correction- application of the Control-$U$ gates- measure of the auxiliary qubit in x-basis Initialization with resetAs we want to perform an iterative algorithm in the same circuit, we need to reset the auxiliary qubit $q0$ after the measument gate and initialize it again as before to recycle the qubit.
###Code
qc_S.reset(0)
qc_S.h(0)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Phase correction (for step 2)As seen in the theory, in order to extract the phase bit $\varphi_{1}$, we perform a phase correction of $-\pi\varphi_2/2$.Of course, we need to apply the phase correction in the circuit only if the phase bit $\varphi_2=1$, i.e. we have to apply the phase correction of $-\pi/2$ only if the classical bit $c_0$ is 1.So, after the reset we apply the phase gate $P(\theta)$ with phase $\theta=-\pi/2$ conditioned by the classical bit $c_0$ ($=\varphi_2$) using the `c_if` method.So as we saw in the first part of this tutorial, we have to use the `c_if` method with a value of 1, as $1_{10} = 001_{2}$ (the subscripts $_{10}$ and $_2$ indicate the decimal and binary representations).
###Code
qc_S.p(-pi/2,0).c_if(c,1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Control-$U$ gates and x-measurement (for step 2)We apply the $CU$ operations as we did in the first step. For the second step we have $t=m-2$, hence $2^t=1$. So we apply $\text{CP}(\pi/2)$ once. And then we perform the x-measurment of the qubit $q_0$, storing the result, the phase bit $\varphi_1$, in the bit $c_1$ of classical register.
###Code
## 2^t c-U operations (with t=m-2)
for _ in range(2**(m-2)):
qc_S.cp(pi/2,0,1)
x_measurement(qc_S, q[0], c[1])
###Output
_____no_output_____
###Markdown
Et voilà, we have our final circuit
###Code
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's execute the circuit with the `qasm_simulator`, the simulator without noise that run locally.
###Code
sim = Aer.get_backend('qasm_simulator')
count0 = execute(qc_S, sim).result().get_counts()
key_new = [str(int(key,2)/2**m) for key in list(count0.keys())]
count1 = dict(zip(key_new, count0.values()))
fig, ax = plt.subplots(1,2)
plot_histogram(count0, ax=ax[0])
plot_histogram(count1, ax=ax[1])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
In the picture we have the same histograms but on the left we have on the x-axis the string with phase bits $\varphi_1$, $\varphi_2$ and on the right the actual phase $\varphi$ in decimal representation.As we expected we have found $\varphi=\frac{1}{4}=0.25$ with a $100\%$ probability. IPE example with a 2-qubit gateNow, we want to apply the IPE algorithm to estimate the phase for a 2-qubit gate $U$. For this example, let's consider the controlled version of the $T$ gate, i.e. the gate $U=\textrm{Controlled-}T$ (that from now we will express more complactly with $CT$). Its matrix is$$ CT = \begin{bmatrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & e^\frac{i\pi}{4}\\ \end{bmatrix} $$That is, the $CT$ gate adds a phase $\pi/4$ to the state $|11\rangle$, leaving unchanged the phase of the other computational basis states $|00\rangle$, $|01\rangle$, $|10\rangle$.Let's consider to estimate the phase $\phi=\pi/4$ for the eigenstate $|11\rangle$, we should find $\varphi=1/8$, since $\phi = 2 \pi \varphi$. Therefore to estimate the phase we neeed exactly 3 classical bits, i.e. $m=3$, since $1/2^3=1/8$. So $\varphi=0.\varphi_1\varphi_2\varphi_3$.As done with the example for the 1-qubit $U$ operator we will go through the same steps but this time we will have $3$ steps since $m=3$, and we will not repeat all the explanations. So for details see the above example for 1-qubit $U$ gate.First, we initialize the circuit with 3 qubits, 1 for the auxiliary qubit and 2 for the 2-qubit gate, and 3 classical bits to store the phase bits $\varphi_1$, $\varphi_2$, $\varphi_3$.
###Code
nq = 3 # number of qubits
m = 3 # number of classical bits
q = QuantumRegister(nq,'q')
c = ClassicalRegister(m,'c')
qc = QuantumCircuit(q,c)
###Output
_____no_output_____
###Markdown
First stepNow we build the quantum circuit for the first step, to estimate the least significant phase bit $\varphi_m=\varphi_3$. InitializationWe inizialize the auxiliary qubit and the other qubits with the eigenstate $|11\rangle$.
###Code
qc.h(0)
qc.x([1,2])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gatesThen we have to apply multiple times the $CU$ operator, that, in this example, is the Controlled-$CT$ gate ($CCT$ for short).To implement $CCT$ in the circuit, since $T$ is a phase gate, we can use the multi-controlled phase gate $\text{MCP}(\theta)$, with $\theta=\pi/4$.
###Code
cu_circ = QuantumCircuit(nq)
cu_circ.mcp(pi/4,[0,1],2)
cu_circ.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's apply $2^t$ times $\text{MCP}(\pi/4)$. Since for the first step $t=m-1$ and $m=3$, we have $2^t=4$.
###Code
for _ in range(2**(m-1)):
qc.mcp(pi/4,[0,1],2)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Measure in x-basisFinally, we perform the measurenment of the auxiliary qubit in x-basis.We can use the `x_measurement` function defined above in the example for 1-qubit gate. In this way we have obtained the phase bit $\varphi_3$ and stored it in the classical bit $c_0$.
###Code
x_measurement(qc, q[0], c[0])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Subsequent steps (2nd, 3rd)Now we build the quantum circuit for the other remaining steps, the second and the third ones.As said in the first example, in these steps we have the additional sub-step of the phase correction. Initialization with reset
###Code
qc.reset(0)
qc.h(0)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Phase correction (for step 2)In order to extract the phase bit $\varphi_{2}$, we perform a phase correction of $-\pi\varphi_3/2$.So, after the reset we apply the phase gate $P(\theta)$ with phase $\theta=-\pi/2$ conditioned by the classical bit $c_0$ ($=\varphi_3$).
###Code
qc.p(-pi/2,0).c_if(c,1)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Control-$U$ gates and x-measurement (for step 2)We apply the $CU$ operations as we did in the first step. For the second step we have $t=m-2$, hence $2^t=2$. So we apply $\text{MCP}(\pi/4)$ $2$ times. And then we perform the x-measurment of the qubit $q_0$, storing the phase bit $\varphi_2$ in the bit $c_1$.
###Code
for _ in range(2**(m-2)):
qc.mcp(pi/4,[0,1],2)
x_measurement(qc, q[0], c[1])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
All substeps of the 3rd stepFor the 3rd and last step, we perform the reset and initialization of the auxiliary qubit as done in the second step.Then at the 3rd step we have to perform the phase correction of $-2\pi 0.0\varphi_{2}\varphi_{3}= -2\pi \left(\frac{\varphi_2}{4}+\frac{\varphi_3}{8}\right)=-\frac{\varphi_2\pi}{2}-\frac{ \varphi_3\pi}{4}$, thus we have to apply 2 conditioned phase corrections, one conditioned by $\varphi_3$ ($=c_0$) and the other by $\varphi_2$($=c_1$). To do this we have to apply the following:- gate $P(-\pi/4)$ conditioned by $c_0=1$, that is, by $c=001$ (c_if with value 1)- gate $P(-\pi/2)$ conditioned by $c_1=1$, that is, the gate is applied when $c=010$ (c_if with values $2$)- gate $P(-3\pi/4)$ conditioned by $c_1=1$ and $c_0=1$ that is, the gate is applied when $c=011$ (c_if with values $3$)Next, the $CU$ operations: we apply $2^t$ times the $\text{MCP}(\pi/4)$ gate and since at the 3rd step $t=m-3=0$, we apply the gate only once.
###Code
# initialization of qubit q0
qc.reset(0)
qc.h(0)
# phase correction
qc.p(-pi/4,0).c_if(c,1)
qc.p(-pi/2,0).c_if(c,2)
qc.p(-3*pi/2,0).c_if(c,3)
# c-U operations
for _ in range(2**(m-3)):
qc.mcp(pi/4,[0,1],2)
# X measurement
qc.h(0)
qc.measure(0,2)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Now, we execute the circuit with the simulator without noise.
###Code
count0 = execute(qc, sim).result().get_counts()
key_new = [str(int(key,2)/2**m) for key in list(count0.keys())]
count1 = dict(zip(key_new, count0.values()))
fig, ax = plt.subplots(1,2)
plot_histogram(count0, ax=ax[0])
plot_histogram(count1, ax=ax[1])
fig.tight_layout()
###Output
_____no_output_____
###Markdown
We have obtained $100\%$ probability to find $\varphi=0.125$, that is, $1/8$, as expected.
###Code
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
/opt/miniconda3/envs/qiskit/lib/python3.9/site-packages/qiskit/aqua/operators/operator_globals.py:48: DeprecationWarning: `from_label` is deprecated and will be removed no earlier than 3 months after the release date. Use Pauli(label) instead.
X = make_immutable(PrimitiveOp(Pauli.from_label('X')))
###Markdown
Iterative Quantum Phase Estimation AlgorithmThe goal of this tutorial is to understand how the Iterative Phase Estimation (IPE) algorithm works, why would we use the IPE algorithm instead of the QPE (Quantum Phase Estimation) algorithm and how to build it with Qiskit using the same circuit exploiting reset gate and the `c_if` method that allows to apply gates conditioned by the values stored in a classical register, resulting from previous measurements.**References**- [Section 2 of Lab 4: Iterative Phase Estimation (IPE) Algorithm](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html2-iterative-phase-estimation-ipe-algorithm) - [Ch.3.6 Quantum Phase Estimation](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html)
###Code
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute, assemble, Aer
from qiskit.tools.visualization import plot_histogram
from math import pi
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Conditined gates: the c_if method Before starting the IPE algorithm, we will give a brief tutorial about the Qiskit conditional method, c_if, as it goes into building the IPE circuit.`c_if` is a function (actually a method of the gate class) to perform conditioned operations based on the value stored previously in a classical register. With this feature you can apply gates after a measurement in the same circuit conditioned by the measurement outcome.For example, the following code will execute the $X$ gate if the value of the classical register is $0$.
###Code
q = QuantumRegister(1,'q')
c = ClassicalRegister(1,'c')
qc = QuantumCircuit(q, c)
qc.h(0)
qc.measure(0,0)
qc.x(0).c_if(c, 0)
qc.draw(output='mpl')
###Output
_____no_output_____
###Markdown
We highlight that the method c_if expects as the first argument a whole classical register, not a single classical bit (or a list of classical bits), and as the second argument a value in decimal representation (a non-negative integer), not the value of a single bit, 0, or 1 (or a list/string of binary digits).Let's make another example. Consider that we want to perform a bit flip on the third qubit after the measurements in the following circuit, when the results of the measurement of $q_0$ and $q_1$ are both $1$.
###Code
q = QuantumRegister(3,'q')
c = ClassicalRegister(3,'c')
qc = QuantumCircuit(q, c)
qc.h(q[0])
qc.h(q[1])
qc.h(q[2])
qc.barrier()
qc.measure(q,c)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
We want to apply the $X$ gate, only if both the results of the measurement of $q_0$ and $q_1$ are $1$. We can do this using the c_if method, conditioning the application of $X$ depending on the value passed as argument to c_if.We will have to encode the value to pass to the c_if method such that it will check the values 011 and 111 (in binary representation), since it does not matter what is in the leftmost position.The 2 integer values in decimal representation: We can check the solutions using the bin() method in python (the prefix `0b` indicates the binary format).
###Code
print(bin(3))
print(bin(7))
###Output
0b11
0b111
###Markdown
So we have to apply $X$ to $q_2$ using c_if two times, one for each value corresponding to 011 and 111.
###Code
q = QuantumRegister(3,'q')
c = ClassicalRegister(3,'c')
qc = QuantumCircuit(q, c)
qc.h(0)
qc.h(1)
qc.h(2)
qc.barrier()
qc.measure(q,c)
qc.x(2).c_if(c, 3) # for the 011 case
qc.x(2).c_if(c, 7) # for the 111 case
qc.draw(output='mpl')
###Output
_____no_output_____
###Markdown
IPEThe motivation for using the IPE algorithm is that QPE algorithm works fine for short depth circuits but when the circuit starts to grow, it doesn't work properly due to gate noise and decoherence times.The detailed explanation of how the algorithm works can be found in [Iterative Phase Estimation (IPE) Algorithm](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html2-iterative-phase-estimation-ipe-algorithm). To understand QPE in depth, you can see also [Ch.3.6 Quantum Phase Estimation](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html). IPE example with a 1-qubit gate for $U$We want to apply the IPE algorithm to estimate the phase for a 1-qubit operator $U$. For example, here we use the $S$-gate.Let's apply the IPE algorithm to estimate the phase for $S$-gate.Its matrix is $$ S = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{2}\\ \end{bmatrix}$$That is, the $S$-gate adds a phase $\pi/2$ to the state $|1\rangle$, leaving unchanged the phase of the state $|0\rangle$$$ S|1\rangle = e^\frac{i\pi}{2}|1\rangle $$In the following, we will use the notation and terms used in [Section 2 of lab 4](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html2-iterative-phase-estimation-ipe-algorithm).Let's consider to estimate the phase $\phi=\frac{\pi}{2}$ for the eigenstate $|1\rangle$, we should find $\varphi=\frac{1}{4}$ (where $\phi = 2 \pi \varphi$). Therefore to estimate the phase we need exactly 2 phase bits, i.e. $m=2$, since $1/2^2=1/4$. So $\varphi=0.\varphi_1\varphi_2$.Remember from the theory that for the IPE algorithm, $m$ is also the number of iterations, so we need only $2$ iterations or steps.First, we initialize the circuit. IPE works with only 1 auxiliary qubit, instead of $m$ counting qubits of the QPE algorithm. Therefore, we need 2 qubits, 1 auxiliary qubit and 1 for the eigenstate of $U$-gate, and a classical register of 2 bits, for the phase bits $\varphi_1$, $\varphi_2$.
###Code
nq = 2
m = 2
q = QuantumRegister(nq,'q')
c = ClassicalRegister(m,'c')
qc_S = QuantumCircuit(q,c)
###Output
_____no_output_____
###Markdown
First stepNow we build the quantum circuit for the first step, that is, the first iteration of the algorithm, to estimate the least significant phase bit $\varphi_m$, in this case $\varphi_2$. For the first step we have 3 sub-steps:- initialization- application of the Controlled-$U$ gates- measure of the auxiliary qubit in x-basis InitializationThe initialization consists of application the Hadamard gate to the auxiliary qubit and the preparation of the eigenstate $|1\rangle$.
###Code
qc_S.h(0)
qc_S.x(1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gatesThen we have to apply $2^t$ times the Controlled-$U$ operators (see also in the docs [Two qubit gates](https://qiskit.org/documentation/tutorials/circuits/3_summary_of_quantum_operations.htmlTwo-qubit-gates)), that, in this example, is the Controlled-$S$ gate ($CS$ for short).To implement $CS$ in the circuit, since $S$ is a phase gate, we can use the controlled phase gate $\text{CP}(\theta)$, with $\theta=\pi/2$.
###Code
cu_circ = QuantumCircuit(2)
cu_circ.cp(pi/2,0,1)
cu_circ.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's apply $2^t$ times $\text{CP}(\pi/2)$. Since for the first step $t=m-1$, and $m=2$, we have $2^t=2$.
###Code
for _ in range(2**(m-1)):
qc_S.cp(pi/2,0,1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Measure in x-basisFinally, we perform the measurenment of the auxiliary qubit in x-basis. So we will define a function to perform the x_measure and then apply it.
###Code
def x_measurement(qc, qubit, cbit):
"""Measure 'qubit' in the X-basis, and store the result in 'cbit'"""
qc.h(qubit)
qc.measure(qubit, cbit)
###Output
_____no_output_____
###Markdown
In this way we obtain the phase bit $\varphi_2$ and store it in the classical bit $c_0$.
###Code
x_measurement(qc_S, q[0], c[0])
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Subsequent steps (2nd step)Now we build the quantum circuit for the other remaining steps, in this example, only the second one.In these steps we have 4 sub-steps: the 3 sub-steps as in the first step and, in the middle, the additional step of the phase correction- initialization with reset- phase correction- application of the Control-$U$ gates- measure of the auxiliary qubit in x-basis Initialization with resetAs we want to perform an iterative algorithm in the same circuit, we need to reset the auxiliary qubit $q0$ after the measument gate and initialize it again as before to recycle the qubit.
###Code
qc_S.reset(0)
qc_S.h(0)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Phase correction (for step 2)As seen in the theory, in order to extract the phase bit $\varphi_{1}$, we perform a phase correction of $-\pi\varphi_2/2$.Of course, we need to apply the phase correction in the circuit only if the phase bit $\varphi_2=1$, i.e. we have to apply the phase correction of $-\pi/2$ only if the classical bit $c_0$ is 1.So, after the reset we apply the phase gate $P(\theta)$ with phase $\theta=-\pi/2$ conditioned by the classical bit $c_0$ ($=\varphi_2$) using the `c_if` method.So as we saw in the first part of this tutorial, we have to use the `c_if` method with a value of 1, as $1_{10} = 001_{2}$ (the subscripts $_{10}$ and $_2$ indicate the decimal and binary representations).
###Code
qc_S.p(-pi/2,0).c_if(c,1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Control-$U$ gates and x-measurement (for step 2)We apply the $CU$ operations as we did in the first step. For the second step we have $t=m-2$, hence $2^t=1$. So we apply $\text{CP}(\pi/2)$ once. And then we perform the x-measurment of the qubit $q_0$, storing the result, the phase bit $\varphi_1$, in the bit $c_1$ of classical register.
###Code
## 2^t c-U operations (with t=m-2)
for _ in range(2**(m-2)):
qc_S.cp(pi/2,0,1)
x_measurement(qc_S, q[0], c[1])
###Output
_____no_output_____
###Markdown
Et voilà, we have our final circuit
###Code
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's execute the circuit with the `qasm_simulator`, the simulator without noise that run locally.
###Code
sim = Aer.get_backend('qasm_simulator')
count0 = execute(qc_S, sim).result().get_counts()
key_new = [str(int(key,2)/2**m) for key in list(count0.keys())]
count1 = dict(zip(key_new, count0.values()))
fig, ax = plt.subplots(1,2)
plot_histogram(count0, ax=ax[0])
plot_histogram(count1, ax=ax[1])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
In the picture we have the same histograms but on the left we have on the x-axis the string with phase bits $\varphi_1$, $\varphi_2$ and on the right the actual phase $\varphi$ in decimal representation.As we expected we have found $\varphi=\frac{1}{4}=0.25$ with a $100\%$ probability. IPE example with a 2-qubit gateNow, we want to apply the IPE algorithm to estimate the phase for a 2-qubit gate $U$. For this example, let's consider the controlled version of the $T$ gate, i.e. the gate $U=\textrm{Controlled-}T$ (that from now we will express more complactly with $CT$). Its matrix is$$ CT = \begin{bmatrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & e^\frac{i\pi}{4}\\ \end{bmatrix} $$That is, the $CT$ gate adds a phase $\pi/4$ to the state $|11\rangle$, leaving unchanged the phase of the other computational basis states $|00\rangle$, $|01\rangle$, $|10\rangle$.Let's consider to estimate the phase $\phi=\pi/4$ for the eigenstate $|11\rangle$, we should find $\varphi=1/8$, since $\phi = 2 \pi \varphi$. Therefore to estimate the phase we neeed exactly 3 classical bits, i.e. $m=3$, since $1/2^3=1/8$. So $\varphi=0.\varphi_1\varphi_2\varphi_3$.As done with the example for the 1-qubit $U$ operator we will go through the same steps but this time we will have $3$ steps since $m=3$, and we will not repeat all the explanations. So for details see the above example for 1-qubit $U$ gate.First, we initialize the circuit with 3 qubits, 1 for the auxiliary qubit and 2 for the 2-qubit gate, and 3 classical bits to store the phase bits $\varphi_1$, $\varphi_2$, $\varphi_3$.
###Code
nq = 3 # number of qubits
m = 3 # number of classical bits
q = QuantumRegister(nq,'q')
c = ClassicalRegister(m,'c')
qc = QuantumCircuit(q,c)
###Output
_____no_output_____
###Markdown
First stepNow we build the quantum circuit for the first step, to estimate the least significant phase bit $\varphi_m=\varphi_3$. InitializationWe inizialize the auxiliary qubit and the other qubits with the eigenstate $|11\rangle$.
###Code
qc.h(0)
qc.x([1,2])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gatesThen we have to apply multiple times the $CU$ operator, that, in this example, is the Controlled-$CT$ gate ($CCT$ for short).To implement $CCT$ in the circuit, since $T$ is a phase gate, we can use the multi-controlled phase gate $\text{MCP}(\theta)$, with $\theta=\pi/4$.
###Code
cu_circ = QuantumCircuit(nq)
cu_circ.mcp(pi/4,[0,1],2)
cu_circ.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's apply $2^t$ times $\text{MCP}(\pi/4)$. Since for the first step $t=m-1$ and $m=3$, we have $2^t=4$.
###Code
for _ in range(2**(m-1)):
qc.mcp(pi/4,[0,1],2)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Measure in x-basisFinally, we perform the measurement of the auxiliary qubit in the x-basis.We can use the `x_measurement` function defined above in the example for the 1-qubit gate. In this way we obtain the phase bit $\varphi_3$ and store it in the classical bit $c_0$.
###Code
x_measurement(qc, q[0], c[0])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Subsequent steps (2nd, 3rd)Now we build the quantum circuit for the other remaining steps, the second and the third ones.As said in the first example, in these steps we have the additional sub-step of the phase correction. Initialization with reset
###Code
qc.reset(0)
qc.h(0)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Phase correction (for step 2)In order to extract the phase bit $\varphi_{2}$, we perform a phase correction of $-\pi\varphi_3/2$.So, after the reset we apply the phase gate $P(\theta)$ with phase $\theta=-\pi/2$ conditioned by the classical bit $c_0$ ($=\varphi_3$).
###Code
qc.p(-pi/2,0).c_if(c,1)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gates and x-measurement (for step 2)We apply the $CU$ operations as we did in the first step. For the second step we have $t=m-2$, hence $2^t=2$. So we apply $\text{MCP}(\pi/4)$ $2$ times. Then we perform the x-measurement of the qubit $q_0$, storing the phase bit $\varphi_2$ in the bit $c_1$.
###Code
for _ in range(2**(m-2)):
qc.mcp(pi/4,[0,1],2)
x_measurement(qc, q[0], c[1])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
All substeps of the 3rd stepFor the 3rd and last step, we perform the reset and initialization of the auxiliary qubit as done in the second step.Then at the 3rd step we have to perform the phase correction of $-2\pi 0.0\varphi_{2}\varphi_{3}= -2\pi \left(\frac{\varphi_2}{4}+\frac{\varphi_3}{8}\right)=-\frac{\varphi_2\pi}{2}-\frac{ \varphi_3\pi}{4}$, thus we have to apply 2 conditioned phase corrections, one conditioned by $\varphi_3$ ($=c_0$) and the other by $\varphi_2$($=c_1$). To do this we have to apply the following:- gate $P(-\pi/4)$ conditioned by $c_0=1$, that is, by $c=001$ (c_if with value 1)- gate $P(-\pi/2)$ conditioned by $c_1=1$, that is, the gate is applied when $c=010$ (c_if with values $2$)- gate $P(-3\pi/4)$ conditioned by $c_1=1$ and $c_0=1$ that is, the gate is applied when $c=011$ (c_if with values $3$)Next, the $CU$ operations: we apply $2^t$ times the $\text{MCP}(\pi/4)$ gate and since at the 3rd step $t=m-3=0$, we apply the gate only once.
###Code
# initialization of qubit q0
qc.reset(0)
qc.h(0)
# phase correction
qc.p(-pi/4,0).c_if(c,1)
qc.p(-pi/2,0).c_if(c,2)
qc.p(-3*pi/4,0).c_if(c,3)  # P(-3*pi/4) when both c_0 and c_1 are 1 (c = 011)
# c-U operations
for _ in range(2**(m-3)):
qc.mcp(pi/4,[0,1],2)
# X measurement
qc.h(0)
qc.measure(0,2)
qc.draw('mpl')
###Output
_____no_output_____
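###Markdown
The three conditional phase gates above follow a simple pattern. As a minimal sketch (an illustration added here, not part of the original tutorial), assuming $k$ phase bits have already been measured and are stored as the integer value $v$ of the classical register, the correction to apply on the auxiliary qubit is $-2\pi v/2^{k+1}$:
###Code
from math import pi, isclose
# Sketch (assumption, for illustration only): general IPE phase-correction angle
# after k phase bits have been measured, stored as the integer value v of the
# classical register.
def correction_angle(v, k):
    return -2 * pi * v / 2**(k + 1)
# For k = 2 this reproduces the three conditional corrections used above.
assert isclose(correction_angle(1, 2), -pi/4)
assert isclose(correction_angle(2, 2), -pi/2)
assert isclose(correction_angle(3, 2), -3*pi/4)
###Output
_____no_output_____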
###Markdown
Now, we execute the circuit with the simulator without noise.
###Code
count0 = execute(qc, sim).result().get_counts()
key_new = [str(int(key,2)/2**m) for key in list(count0.keys())]  # convert each bitstring key to the phase it encodes (integer value / 2^m)
count1 = dict(zip(key_new, count0.values()))
fig, ax = plt.subplots(1,2)
plot_histogram(count0, ax=ax[0])
plot_histogram(count1, ax=ax[1])
fig.tight_layout()
###Output
_____no_output_____
###Markdown
We have obtained $100\%$ probability to find $\varphi=0.125$, that is, $1/8$, as expected.
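###Markdown
As a quick programmatic check (a minimal sketch added here, assuming `count0` and `m` from the cells above are still in scope), we can read the estimated phase off the most frequent bitstring directly:
###Code
# Sketch: interpret the most frequent bitstring c_{m-1}...c_0 as the binary
# fraction 0.varphi_1...varphi_m, exactly as done for the histogram keys above.
best_bits = max(count0, key=count0.get)
print("Estimated phase:", int(best_bits, 2) / 2**m)
###Output
Estimated phase: 0.125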
###Code
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
/opt/miniconda3/envs/qiskit/lib/python3.9/site-packages/qiskit/aqua/operators/operator_globals.py:48: DeprecationWarning: `from_label` is deprecated and will be removed no earlier than 3 months after the release date. Use Pauli(label) instead.
X = make_immutable(PrimitiveOp(Pauli.from_label('X')))
###Markdown
Iterative Quantum Phase Estimation AlgorithmThe goal of this tutorial is to understand how the Iterative Phase Estimation (IPE) algorithm works, why we would use it instead of the QPE (Quantum Phase Estimation) algorithm, and how to build it with Qiskit using a single circuit that exploits the reset gate and the `c_if` method, which allows us to apply gates conditioned on the values stored in a classical register as a result of previous measurements.**References**- [Section 2 of Lab 4: Iterative Phase Estimation (IPE) Algorithm](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html2-iterative-phase-estimation-ipe-algorithm) - [Ch.3.6 Quantum Phase Estimation](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html)
###Code
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute, assemble, Aer
from qiskit.tools.visualization import plot_histogram
from math import pi
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Conditioned gates: the c_if method Before starting the IPE algorithm, we will give a brief tutorial about the Qiskit conditional method, c_if, as it goes into building the IPE circuit.`c_if` is a function (actually a method of the gate class) to perform conditioned operations based on the value stored previously in a classical register. With this feature you can apply gates after a measurement in the same circuit conditioned by the measurement outcome.For example, the following code will execute the $X$ gate if the value of the classical register is $0$.
###Code
q = QuantumRegister(1,'q')
c = ClassicalRegister(1,'c')
qc = QuantumCircuit(q, c)
qc.h(0)
qc.measure(0,0)
qc.x(0).c_if(c, 0)
qc.draw(output='mpl')
###Output
_____no_output_____
###Markdown
We highlight that the method c_if expects as the first argument a whole classical register, not a single classical bit (or a list of classical bits), and as the second argument a value in decimal representation (a non-negative integer), not the value of a single bit, 0, or 1 (or a list/string of binary digits).Let's make another example. Consider that we want to perform a bit flip on the third qubit after the measurements in the following circuit, when the results of the measurement of $q_0$ and $q_1$ are both $1$.
###Code
q = QuantumRegister(3,'q')
c = ClassicalRegister(3,'c')
qc = QuantumCircuit(q, c)
qc.h(q[0])
qc.h(q[1])
qc.h(q[2])
qc.barrier()
qc.measure(q,c)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
We want to apply the $X$ gate only if both the results of the measurement of $q_0$ and $q_1$ are $1$. We can do this using the c_if method, conditioning the application of $X$ on the value passed as argument to c_if.We will have to encode the value to pass to the c_if method such that it checks the values 011 and 111 (in binary representation), since it does not matter what $q_2$ is measured as (the leftmost bit of the register).The two corresponding integer values in decimal representation are 3 and 7. We can check this using the bin() method in Python (the prefix `0b` indicates the binary format).
###Code
print(bin(3))
print(bin(7))
###Output
0b11
0b111
###Markdown
So we have to apply $X$ to $q_2$ using c_if two times, one for each value corresponding to 011 and 111.
###Code
q = QuantumRegister(3,'q')
c = ClassicalRegister(3,'c')
qc = QuantumCircuit(q, c)
qc.h(0)
qc.h(1)
qc.h(2)
qc.barrier()
qc.measure(q,c)
qc.x(2).c_if(c, 3) # for the 011 case
qc.x(2).c_if(c, 7) # for the 111 case
qc.draw(output='mpl')
###Output
_____no_output_____
###Markdown
IPEThe motivation for using the IPE algorithm is that QPE algorithm works fine for short depth circuits but when the circuit starts to grow, it doesn't work properly due to gate noise and decoherence times.The detailed explanation of how the algorithm works can be found in [Iterative Phase Estimation (IPE) Algorithm](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html2-iterative-phase-estimation-ipe-algorithm). To understand QPE in depth, you can see also [Ch.3.6 Quantum Phase Estimation](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html). IPE example with a 1-qubit gate for $U$We want to apply the IPE algorithm to estimate the phase for a 1-qubit operator $U$. For example, here we use the $S$-gate.Let's apply the IPE algorithm to estimate the phase for $S$-gate.Its matrix is $$ S = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{2}\\ \end{bmatrix}$$That is, the $S$-gate adds a phase $\pi/2$ to the state $|1\rangle$, leaving unchanged the phase of the state $|0\rangle$$$ S|1\rangle = e^\frac{i\pi}{2}|1\rangle $$In the following, we will use the notation and terms used in [Section 2 of lab 4](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html2-iterative-phase-estimation-ipe-algorithm).Let's consider to estimate the phase $\phi=\frac{\pi}{2}$ for the eigenstate $|1\rangle$, we should find $\varphi=\frac{1}{4}$ (where $\phi = 2 \pi \varphi$). Therefore to estimate the phase we need exactly 2 phase bits, i.e. $m=2$, since $1/2^2=1/4$. So $\varphi=0.\varphi_1\varphi_2$.Remember from the theory that for the IPE algorithm, $m$ is also the number of iterations, so we need only $2$ iterations or steps.First, we initialize the circuit. IPE works with only 1 auxiliary qubit, instead of $m$ counting qubits of the QPE algorithm. Therefore, we need 2 qubits, 1 auxiliary qubit and 1 for the eigenstate of $U$-gate, and a classical register of 2 bits, for the phase bits $\varphi_1$, $\varphi_2$.
###Code
nq = 2
m = 2
q = QuantumRegister(nq,'q')
c = ClassicalRegister(m,'c')
qc_S = QuantumCircuit(q,c)
###Output
_____no_output_____
###Markdown
First stepNow we build the quantum circuit for the first step, that is, the first iteration of the algorithm, to estimate the least significant phase bit $\varphi_m$, in this case $\varphi_2$. For the first step we have 3 sub-steps:- initialization- application of the Controlled-$U$ gates- measurement of the auxiliary qubit in the x-basis InitializationThe initialization consists of applying the Hadamard gate to the auxiliary qubit and preparing the eigenstate $|1\rangle$.
###Code
qc_S.h(0)
qc_S.x(1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gatesThen we have to apply $2^t$ times the Controlled-$U$ operators (see also in the docs [Two qubit gates](https://qiskit.org/documentation/tutorials/circuits/3_summary_of_quantum_operations.htmlTwo-qubit-gates)), that, in this example, is the Controlled-$S$ gate ($CS$ for short).To implement $CS$ in the circuit, since $S$ is a phase gate, we can use the controlled phase gate $\text{CP}(\theta)$, with $\theta=\pi/2$.
###Code
cu_circ = QuantumCircuit(2)
cu_circ.cp(pi/2,0,1)
cu_circ.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's apply $2^t$ times $\text{CP}(\pi/2)$. Since for the first step $t=m-1$, and $m=2$, we have $2^t=2$.
###Code
for _ in range(2**(m-1)):
qc_S.cp(pi/2,0,1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Measure in x-basisFinally, we perform the measurement of the auxiliary qubit in the x-basis. We define a function to perform the x-measurement and then apply it.
###Code
def x_measurement(qc, qubit, cbit):
"""Measure 'qubit' in the X-basis, and store the result in 'cbit'"""
qc.h(qubit)
qc.measure(qubit, cbit)
###Output
_____no_output_____
###Markdown
In this way we obtain the phase bit $\varphi_2$ and store it in the classical bit $c_0$.
###Code
x_measurement(qc_S, q[0], c[0])
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Subsequent steps (2nd step)Now we build the quantum circuit for the remaining steps, in this example only the second one.In these steps we have 4 sub-steps: the 3 sub-steps as in the first step and, in the middle, the additional step of the phase correction- initialization with reset- phase correction- application of the Controlled-$U$ gates- measurement of the auxiliary qubit in the x-basis Initialization with resetAs we want to perform an iterative algorithm in the same circuit, we need to reset the auxiliary qubit $q_0$ after the measurement gate and initialize it again as before to recycle the qubit.
###Code
qc_S.reset(0)
qc_S.h(0)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Phase correction (for step 2)As seen in the theory, in order to extract the phase bit $\varphi_{1}$, we perform a phase correction of $-\pi\varphi_2/2$.Of course, we need to apply the phase correction in the circuit only if the phase bit $\varphi_2=1$, i.e. we have to apply the phase correction of $-\pi/2$ only if the classical bit $c_0$ is 1.So, after the reset we apply the phase gate $P(\theta)$ with phase $\theta=-\pi/2$ conditioned by the classical bit $c_0$ ($=\varphi_2$) using the `c_if` method.So as we saw in the first part of this tutorial, we have to use the `c_if` method with a value of 1, as $1_{10} = 001_{2}$ (the subscripts $_{10}$ and $_2$ indicate the decimal and binary representations).
###Code
qc_S.p(-pi/2,0).c_if(c,1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gates and x-measurement (for step 2)We apply the $CU$ operations as we did in the first step. For the second step we have $t=m-2$, hence $2^t=1$. So we apply $\text{CP}(\pi/2)$ once. Then we perform the x-measurement of the qubit $q_0$, storing the result, the phase bit $\varphi_1$, in the bit $c_1$ of the classical register.
###Code
## 2^t c-U operations (with t=m-2)
for _ in range(2**(m-2)):
qc_S.cp(pi/2,0,1)
x_measurement(qc_S, q[0], c[1])
###Output
_____no_output_____
###Markdown
Et voilà, we have our final circuit
###Code
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's execute the circuit with the `qasm_simulator`, the noiseless simulator that runs locally.
###Code
sim = Aer.get_backend('qasm_simulator')
count0 = execute(qc_S, sim).result().get_counts()
key_new = [str(int(key,2)/2**m) for key in list(count0.keys())]  # convert each bitstring key to the phase it encodes (integer value / 2^m)
count1 = dict(zip(key_new, count0.values()))
fig, ax = plt.subplots(1,2)
plot_histogram(count0, ax=ax[0])
plot_histogram(count1, ax=ax[1])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
In the picture we have the same histograms, but on the left the x-axis shows the string of phase bits $\varphi_1$, $\varphi_2$, while on the right it shows the actual phase $\varphi$ in decimal representation.As expected, we have found $\varphi=\frac{1}{4}=0.25$ with $100\%$ probability. IPE example with a 2-qubit gateNow, we want to apply the IPE algorithm to estimate the phase for a 2-qubit gate $U$. For this example, let's consider the controlled version of the $T$ gate, i.e. the gate $U=\textrm{Controlled-}T$ (which from now on we will write more compactly as $CT$). Its matrix is$$ CT = \begin{bmatrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & e^\frac{i\pi}{4}\\ \end{bmatrix} $$That is, the $CT$ gate adds a phase $\pi/4$ to the state $|11\rangle$, leaving unchanged the phase of the other computational basis states $|00\rangle$, $|01\rangle$, $|10\rangle$.If we estimate the phase $\phi=\pi/4$ for the eigenstate $|11\rangle$, we should find $\varphi=1/8$, since $\phi = 2 \pi \varphi$. Therefore to estimate the phase we need exactly 3 classical bits, i.e. $m=3$, since $1/2^3=1/8$. So $\varphi=0.\varphi_1\varphi_2\varphi_3$.As in the example for the 1-qubit $U$ operator, we will go through the same steps, but this time we will have $3$ steps since $m=3$, and we will not repeat all the explanations; for details see the 1-qubit $U$ gate example above.First, we initialize the circuit with 3 qubits, 1 for the auxiliary qubit and 2 for the 2-qubit gate, and 3 classical bits to store the phase bits $\varphi_1$, $\varphi_2$, $\varphi_3$.
###Code
nq = 3 # number of qubits
m = 3 # number of classical bits
q = QuantumRegister(nq,'q')
c = ClassicalRegister(m,'c')
qc = QuantumCircuit(q,c)
###Output
_____no_output_____
###Markdown
First stepNow we build the quantum circuit for the first step, to estimate the least significant phase bit $\varphi_m=\varphi_3$. InitializationWe initialize the auxiliary qubit and the other qubits with the eigenstate $|11\rangle$.
###Code
qc.h(0)
qc.x([1,2])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gatesThen we have to apply multiple times the $CU$ operator, that, in this example, is the Controlled-$CT$ gate ($CCT$ for short).To implement $CCT$ in the circuit, since $T$ is a phase gate, we can use the multi-controlled phase gate $\text{MCP}(\theta)$, with $\theta=\pi/4$.
###Code
cu_circ = QuantumCircuit(nq)
cu_circ.mcp(pi/4,[0,1],2)
cu_circ.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's apply $2^t$ times $\text{MCP}(\pi/4)$. Since for the first step $t=m-1$ and $m=3$, we have $2^t=4$.
###Code
for _ in range(2**(m-1)):
qc.mcp(pi/4,[0,1],2)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Measure in x-basisFinally, we perform the measurement of the auxiliary qubit in x-basis.We can use the `x_measurement` function defined above in the example for 1-qubit gate. In this way we have obtained the phase bit $\varphi_3$ and stored it in the classical bit $c_0$.
###Code
x_measurement(qc, q[0], c[0])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Subsequent steps (2nd, 3rd)Now we build the quantum circuit for the other remaining steps, the second and the third ones.As said in the first example, in these steps we have the additional sub-step of the phase correction. Initialization with reset
###Code
qc.reset(0)
qc.h(0)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Phase correction (for step 2)In order to extract the phase bit $\varphi_{2}$, we perform a phase correction of $-\pi\varphi_3/2$.So, after the reset we apply the phase gate $P(\theta)$ with phase $\theta=-\pi/2$ conditioned by the classical bit $c_0$ ($=\varphi_3$).
###Code
qc.p(-pi/2,0).c_if(c,1)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gates and x-measurement (for step 2)We apply the $CU$ operations as we did in the first step. For the second step we have $t=m-2$, hence $2^t=2$. So we apply $\text{MCP}(\pi/4)$ $2$ times. Then we perform the x-measurement of the qubit $q_0$, storing the phase bit $\varphi_2$ in the bit $c_1$.
###Code
for _ in range(2**(m-2)):
qc.mcp(pi/4,[0,1],2)
x_measurement(qc, q[0], c[1])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
All substeps of the 3rd stepFor the 3rd and last step, we perform the reset and initialization of the auxiliary qubit as done in the second step.Then at the 3rd step we have to perform the phase correction of $-2\pi 0.0\varphi_{2}\varphi_{3}= -2\pi \left(\frac{\varphi_2}{4}+\frac{\varphi_3}{8}\right)=-\frac{\varphi_2\pi}{2}-\frac{ \varphi_3\pi}{4}$, thus we have to apply 2 conditioned phase corrections, one conditioned by $\varphi_3$ ($=c_0$) and the other by $\varphi_2$($=c_1$). To do this we have to apply the following:- gate $P(-\pi/4)$ conditioned by $c_0=1$, that is, by $c=001$ (c_if with value $1$)- gate $P(-\pi/2)$ conditioned by $c_1=1$, that is, the gate is applied when $c=010$ (c_if with values $2$)- gate $P(-3\pi/4)$ conditioned by $c_1=1$ and $c_0=1$ that is, the gate is applied when $c=011$ (c_if with values $3$)Next, the $CU$ operations: we apply $2^t$ times the $\text{MCP}(\pi/4)$ gate and since at the 3rd step $t=m-3=0$, we apply the gate only once.
###Code
# initialization of qubit q0
qc.reset(0)
qc.h(0)
# phase correction
qc.p(-pi/4,0).c_if(c,1)
qc.p(-pi/2,0).c_if(c,2)
qc.p(-3*pi/4,0).c_if(c,3)  # P(-3*pi/4) when both c_0 and c_1 are 1 (c = 011)
# c-U operations
for _ in range(2**(m-3)):
qc.mcp(pi/4,[0,1],2)
# X measurement
qc.h(0)
qc.measure(0,2)
qc.draw('mpl')
###Output
_____no_output_____
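###Markdown
The three conditional phase gates above follow a simple pattern. As a minimal sketch (an illustration added here, not part of the original tutorial), assuming $k$ phase bits have already been measured and are stored as the integer value $v$ of the classical register, the correction to apply on the auxiliary qubit is $-2\pi v/2^{k+1}$:
###Code
from math import pi, isclose
# Sketch (assumption, for illustration only): general IPE phase-correction angle
# after k phase bits have been measured, stored as the integer value v of the
# classical register.
def correction_angle(v, k):
    return -2 * pi * v / 2**(k + 1)
# For k = 2 this reproduces the three conditional corrections used above.
assert isclose(correction_angle(1, 2), -pi/4)
assert isclose(correction_angle(2, 2), -pi/2)
assert isclose(correction_angle(3, 2), -3*pi/4)
###Output
_____no_output_____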
###Markdown
Now, we execute the circuit with the simulator without noise.
###Code
count0 = execute(qc, sim).result().get_counts()
key_new = [str(int(key,2)/2**m) for key in list(count0.keys())]  # convert each bitstring key to the phase it encodes (integer value / 2^m)
count1 = dict(zip(key_new, count0.values()))
fig, ax = plt.subplots(1,2)
plot_histogram(count0, ax=ax[0])
plot_histogram(count1, ax=ax[1])
fig.tight_layout()
###Output
_____no_output_____
###Markdown
We have obtained $100\%$ probability to find $\varphi=0.125$, that is, $1/8$, as expected.
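###Markdown
As a quick programmatic check (a minimal sketch added here, assuming `count0` and `m` from the cells above are still in scope), we can read the estimated phase off the most frequent bitstring directly:
###Code
# Sketch: interpret the most frequent bitstring c_{m-1}...c_0 as the binary
# fraction 0.varphi_1...varphi_m, exactly as done for the histogram keys above.
best_bits = max(count0, key=count0.get)
print("Estimated phase:", int(best_bits, 2) / 2**m)
###Output
Estimated phase: 0.125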
###Code
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
/opt/miniconda3/envs/qiskit/lib/python3.9/site-packages/qiskit/aqua/operators/operator_globals.py:48: DeprecationWarning: `from_label` is deprecated and will be removed no earlier than 3 months after the release date. Use Pauli(label) instead.
X = make_immutable(PrimitiveOp(Pauli.from_label('X')))
###Markdown
Iterative Quantum Phase Estimation AlgorithmThe goal of this tutorial is to understand how the Iterative Phase Estimation (IPE) algorithm works, why we would use it instead of the QPE (Quantum Phase Estimation) algorithm, and how to build it with Qiskit using a single circuit that exploits the reset gate and the `c_if` method, which allows us to apply gates conditioned on the values stored in a classical register as a result of previous measurements.**References**- [Section 2 of Lab 4: Iterative Phase Estimation (IPE) Algorithm](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html2-iterative-phase-estimation-ipe-algorithm) - [Ch.3.6 Quantum Phase Estimation](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html)
###Code
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute, assemble, Aer
from qiskit.tools.visualization import plot_histogram
from math import pi
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Conditioned gates: the c_if method Before starting the IPE algorithm, we will give a brief tutorial about the Qiskit conditional method, c_if, as it goes into building the IPE circuit.`c_if` is a function (actually a method of the gate class) to perform conditioned operations based on the value stored previously in a classical register. With this feature you can apply gates after a measurement in the same circuit conditioned by the measurement outcome.For example, the following code will execute the $X$ gate if the value of the classical register is $0$.
###Code
q = QuantumRegister(1,'q')
c = ClassicalRegister(1,'c')
qc = QuantumCircuit(q, c)
qc.h(0)
qc.measure(0,0)
qc.x(0).c_if(c, 0)
qc.draw(output='mpl')
###Output
_____no_output_____
###Markdown
We highlight that the method c_if expects as the first argument a whole classical register, not a single classical bit (or a list of classical bits), and as the second argument a value in decimal representation (a non-negative integer), not the value of a single bit, 0, or 1 (or a list/string of binary digits).Let's make another example. Consider that we want to perform a bit flip on the third qubit after the measurements in the following circuit, when the results of the measurement of $q_0$ and $q_1$ are both $1$.
###Code
q = QuantumRegister(3,'q')
c = ClassicalRegister(3,'c')
qc = QuantumCircuit(q, c)
qc.h(q[0])
qc.h(q[1])
qc.h(q[2])
qc.barrier()
qc.measure(q,c)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
We want to apply the $X$ gate only if both the results of the measurement of $q_0$ and $q_1$ are $1$. We can do this using the c_if method, conditioning the application of $X$ on the value passed as argument to c_if.We will have to encode the value to pass to the c_if method such that it checks the values 011 and 111 (in binary representation), since it does not matter what $q_2$ is measured as (the leftmost bit of the register).The two corresponding integer values in decimal representation are 3 and 7. We can check this using the bin() method in Python (the prefix `0b` indicates the binary format).
###Code
print(bin(3))
print(bin(7))
###Output
0b11
0b111
###Markdown
So we have to apply $X$ to $q_2$ using c_if two times, one for each value corresponding to 011 and 111.
###Code
q = QuantumRegister(3,'q')
c = ClassicalRegister(3,'c')
qc = QuantumCircuit(q, c)
qc.h(0)
qc.h(1)
qc.h(2)
qc.barrier()
qc.measure(q,c)
qc.x(2).c_if(c, 3) # for the 011 case
qc.x(2).c_if(c, 7) # for the 111 case
qc.draw(output='mpl')
###Output
_____no_output_____
###Markdown
IPEThe motivation for using the IPE algorithm is that QPE algorithm works fine for short depth circuits but when the circuit starts to grow, it doesn't work properly due to gate noise and decoherence times.The detailed explanation of how the algorithm works can be found in [Iterative Phase Estimation (IPE) Algorithm](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html2-iterative-phase-estimation-ipe-algorithm). To understand QPE in depth, you can see also [Ch.3.6 Quantum Phase Estimation](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html). IPE example with a 1-qubit gate for $U$We want to apply the IPE algorithm to estimate the phase for a 1-qubit operator $U$. For example, here we use the $S$-gate.Let's apply the IPE algorithm to estimate the phase for $S$-gate.Its matrix is $$ S = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{2}\\ \end{bmatrix}$$That is, the $S$-gate adds a phase $\pi/2$ to the state $|1\rangle$, leaving unchanged the phase of the state $|0\rangle$$$ S|1\rangle = e^\frac{i\pi}{2}|1\rangle $$In the following, we will use the notation and terms used in [Section 2 of lab 4](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html2-iterative-phase-estimation-ipe-algorithm).Let's consider to estimate the phase $\phi=\frac{\pi}{2}$ for the eigenstate $|1\rangle$, we should find $\varphi=\frac{1}{4}$ (where $\phi = 2 \pi \varphi$). Therefore to estimate the phase we need exactly 2 phase bits, i.e. $m=2$, since $1/2^2=1/4$. So $\varphi=0.\varphi_1\varphi_2$.Remember from the theory that for the IPE algorithm, $m$ is also the number of iterations, so we need only $2$ iterations or steps.First, we initialize the circuit. IPE works with only 1 auxiliary qubit, instead of $m$ counting qubits of the QPE algorithm. Therefore, we need 2 qubits, 1 auxiliary qubit and 1 for the eigenstate of $U$-gate, and a classical register of 2 bits, for the phase bits $\varphi_1$, $\varphi_2$.
###Code
nq = 2
m = 2
q = QuantumRegister(nq,'q')
c = ClassicalRegister(m,'c')
qc_S = QuantumCircuit(q,c)
###Output
_____no_output_____
###Markdown
First stepNow we build the quantum circuit for the first step, that is, the first iteration of the algorithm, to estimate the least significant phase bit $\varphi_m$, in this case $\varphi_2$. For the first step we have 3 sub-steps:- initialization- application of the Controlled-$U$ gates- measurement of the auxiliary qubit in the x-basis InitializationThe initialization consists of applying the Hadamard gate to the auxiliary qubit and preparing the eigenstate $|1\rangle$.
###Code
qc_S.h(0)
qc_S.x(1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gatesThen we have to apply $2^t$ times the Controlled-$U$ operators (see also in the docs [Two qubit gates](https://qiskit.org/documentation/tutorials/circuits/3_summary_of_quantum_operations.htmlTwo-qubit-gates)), that, in this example, is the Controlled-$S$ gate ($CS$ for short).To implement $CS$ in the circuit, since $S$ is a phase gate, we can use the controlled phase gate $\text{CP}(\theta)$, with $\theta=\pi/2$.
###Code
cu_circ = QuantumCircuit(2)
cu_circ.cp(pi/2,0,1)
cu_circ.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's apply $2^t$ times $\text{CP}(\pi/2)$. Since for the first step $t=m-1$, and $m=2$, we have $2^t=2$.
###Code
for _ in range(2**(m-1)):
qc_S.cp(pi/2,0,1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Measure in x-basisFinally, we perform the measurement of the auxiliary qubit in the x-basis. We define a function to perform the x-measurement and then apply it.
###Code
def x_measurement(qc, qubit, cbit):
"""Measure 'qubit' in the X-basis, and store the result in 'cbit'"""
qc.h(qubit)
qc.measure(qubit, cbit)
###Output
_____no_output_____
###Markdown
In this way we obtain the phase bit $\varphi_2$ and store it in the classical bit $c_0$.
###Code
x_measurement(qc_S, q[0], c[0])
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Subsequent steps (2nd step)Now we build the quantum circuit for the remaining steps, in this example only the second one.In these steps we have 4 sub-steps: the 3 sub-steps as in the first step and, in the middle, the additional step of the phase correction- initialization with reset- phase correction- application of the Controlled-$U$ gates- measurement of the auxiliary qubit in the x-basis Initialization with resetAs we want to perform an iterative algorithm in the same circuit, we need to reset the auxiliary qubit $q_0$ after the measurement gate and initialize it again as before to recycle the qubit.
###Code
qc_S.reset(0)
qc_S.h(0)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Phase correction (for step 2)As seen in the theory, in order to extract the phase bit $\varphi_{1}$, we perform a phase correction of $-\pi\varphi_2/2$.Of course, we need to apply the phase correction in the circuit only if the phase bit $\varphi_2=1$, i.e. we have to apply the phase correction of $-\pi/2$ only if the classical bit $c_0$ is 1.So, after the reset we apply the phase gate $P(\theta)$ with phase $\theta=-\pi/2$ conditioned by the classical bit $c_0$ ($=\varphi_2$) using the `c_if` method.So as we saw in the first part of this tutorial, we have to use the `c_if` method with a value of 1, as $1_{10} = 001_{2}$ (the subscripts $_{10}$ and $_2$ indicate the decimal and binary representations).
###Code
qc_S.p(-pi/2,0).c_if(c,1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gates and x-measurement (for step 2)We apply the $CU$ operations as we did in the first step. For the second step we have $t=m-2$, hence $2^t=1$. So we apply $\text{CP}(\pi/2)$ once. Then we perform the x-measurement of the qubit $q_0$, storing the result, the phase bit $\varphi_1$, in the bit $c_1$ of the classical register.
###Code
## 2^t c-U operations (with t=m-2)
for _ in range(2**(m-2)):
qc_S.cp(pi/2,0,1)
x_measurement(qc_S, q[0], c[1])
###Output
_____no_output_____
###Markdown
Et voilà, we have our final circuit
###Code
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's execute the circuit with the `qasm_simulator`, the noiseless simulator that runs locally.
###Code
sim = Aer.get_backend('qasm_simulator')
count0 = execute(qc_S, sim).result().get_counts()
key_new = [str(int(key,2)/2**m) for key in list(count0.keys())]  # convert each bitstring key to the phase it encodes (integer value / 2^m)
count1 = dict(zip(key_new, count0.values()))
fig, ax = plt.subplots(1,2)
plot_histogram(count0, ax=ax[0])
plot_histogram(count1, ax=ax[1])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
In the picture we have the same histograms, but on the left the x-axis shows the string of phase bits $\varphi_1$, $\varphi_2$, while on the right it shows the actual phase $\varphi$ in decimal representation.As expected, we have found $\varphi=\frac{1}{4}=0.25$ with $100\%$ probability. IPE example with a 2-qubit gateNow, we want to apply the IPE algorithm to estimate the phase for a 2-qubit gate $U$. For this example, let's consider the controlled version of the $T$ gate, i.e. the gate $U=\textrm{Controlled-}T$ (which from now on we will write more compactly as $CT$). Its matrix is$$ CT = \begin{bmatrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & e^\frac{i\pi}{4}\\ \end{bmatrix} $$That is, the $CT$ gate adds a phase $\pi/4$ to the state $|11\rangle$, leaving unchanged the phase of the other computational basis states $|00\rangle$, $|01\rangle$, $|10\rangle$.If we estimate the phase $\phi=\pi/4$ for the eigenstate $|11\rangle$, we should find $\varphi=1/8$, since $\phi = 2 \pi \varphi$. Therefore to estimate the phase we need exactly 3 classical bits, i.e. $m=3$, since $1/2^3=1/8$. So $\varphi=0.\varphi_1\varphi_2\varphi_3$.As in the example for the 1-qubit $U$ operator, we will go through the same steps, but this time we will have $3$ steps since $m=3$, and we will not repeat all the explanations; for details see the 1-qubit $U$ gate example above.First, we initialize the circuit with 3 qubits, 1 for the auxiliary qubit and 2 for the 2-qubit gate, and 3 classical bits to store the phase bits $\varphi_1$, $\varphi_2$, $\varphi_3$.
###Code
nq = 3 # number of qubits
m = 3 # number of classical bits
q = QuantumRegister(nq,'q')
c = ClassicalRegister(m,'c')
qc = QuantumCircuit(q,c)
###Output
_____no_output_____
###Markdown
First stepNow we build the quantum circuit for the first step, to estimate the least significant phase bit $\varphi_m=\varphi_3$. InitializationWe initialize the auxiliary qubit and the other qubits with the eigenstate $|11\rangle$.
###Code
qc.h(0)
qc.x([1,2])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gatesThen we have to apply multiple times the $CU$ operator, that, in this example, is the Controlled-$CT$ gate ($CCT$ for short).To implement $CCT$ in the circuit, since $T$ is a phase gate, we can use the multi-controlled phase gate $\text{MCP}(\theta)$, with $\theta=\pi/4$.
###Code
cu_circ = QuantumCircuit(nq)
cu_circ.mcp(pi/4,[0,1],2)
cu_circ.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's apply $2^t$ times $\text{MCP}(\pi/4)$. Since for the first step $t=m-1$ and $m=3$, we have $2^t=4$.
###Code
for _ in range(2**(m-1)):
qc.mcp(pi/4,[0,1],2)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Measure in x-basisFinally, we perform the measurement of the auxiliary qubit in x-basis.We can use the `x_measurement` function defined above in the example for 1-qubit gate. In this way we have obtained the phase bit $\varphi_3$ and stored it in the classical bit $c_0$.
###Code
x_measurement(qc, q[0], c[0])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Subsequent steps (2nd, 3rd)Now we build the quantum circuit for the other remaining steps, the second and the third ones.As said in the first example, in these steps we have the additional sub-step of the phase correction. Initialization with reset
###Code
qc.reset(0)
qc.h(0)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Phase correction (for step 2)In order to extract the phase bit $\varphi_{2}$, we perform a phase correction of $-\pi\varphi_3/2$.So, after the reset we apply the phase gate $P(\theta)$ with phase $\theta=-\pi/2$ conditioned by the classical bit $c_0$ ($=\varphi_3$).
###Code
qc.p(-pi/2,0).c_if(c,1)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gates and x-measurement (for step 2)We apply the $CU$ operations as we did in the first step. For the second step we have $t=m-2$, hence $2^t=2$. So we apply $\text{MCP}(\pi/4)$ $2$ times. And then we perform the x-measurement of the qubit $q_0$, storing the phase bit $\varphi_2$ in the bit $c_1$.
###Code
for _ in range(2**(m-2)):
qc.mcp(pi/4,[0,1],2)
x_measurement(qc, q[0], c[1])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
All substeps of the 3rd stepFor the 3rd and last step, we perform the reset and initialization of the auxiliary qubit as done in the second step.Then at the 3rd step we have to perform the phase correction of $-2\pi 0.0\varphi_{2}\varphi_{3}= -2\pi \left(\frac{\varphi_2}{4}+\frac{\varphi_3}{8}\right)=-\frac{\varphi_2\pi}{2}-\frac{ \varphi_3\pi}{4}$, thus we have to apply 2 conditioned phase corrections, one conditioned by $\varphi_3$ ($=c_0$) and the other by $\varphi_2$($=c_1$). To do this we have to apply the following:- gate $P(-\pi/4)$ conditioned by $c_0=1$, that is, by $c=001$ (c_if with value $1$)- gate $P(-\pi/2)$ conditioned by $c_1=1$, that is, the gate is applied when $c=010$ (c_if with values $2$)- gate $P(-3\pi/4)$ conditioned by $c_1=1$ and $c_0=1$ that is, the gate is applied when $c=011$ (c_if with values $3$)Next, the $CU$ operations: we apply $2^t$ times the $\text{MCP}(\pi/4)$ gate and since at the 3rd step $t=m-3=0$, we apply the gate only once.
###Code
# initialization of qubit q0
qc.reset(0)
qc.h(0)
# phase correction
qc.p(-pi/4,0).c_if(c,1)
qc.p(-pi/2,0).c_if(c,2)
qc.p(-3*pi/4,0).c_if(c,3)
# c-U operations
for _ in range(2**(m-3)):
qc.mcp(pi/4,[0,1],2)
# X measurement
qc.h(0)
qc.measure(0,2)
qc.draw('mpl')
###Output
_____no_output_____
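###Markdown
The three conditional phase gates above follow a simple pattern. As a minimal sketch (an illustration added here, not part of the original tutorial), assuming $k$ phase bits have already been measured and are stored as the integer value $v$ of the classical register, the correction to apply on the auxiliary qubit is $-2\pi v/2^{k+1}$:
###Code
from math import pi, isclose
# Sketch (assumption, for illustration only): general IPE phase-correction angle
# after k phase bits have been measured, stored as the integer value v of the
# classical register.
def correction_angle(v, k):
    return -2 * pi * v / 2**(k + 1)
# For k = 2 this reproduces the three conditional corrections used above.
assert isclose(correction_angle(1, 2), -pi/4)
assert isclose(correction_angle(2, 2), -pi/2)
assert isclose(correction_angle(3, 2), -3*pi/4)
###Output
_____no_output_____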
###Markdown
Now, we execute the circuit with the simulator without noise.
###Code
count0 = execute(qc, sim).result().get_counts()
key_new = [str(int(key,2)/2**m) for key in list(count0.keys())]  # convert each bitstring key to the phase it encodes (integer value / 2^m)
count1 = dict(zip(key_new, count0.values()))
fig, ax = plt.subplots(1,2)
plot_histogram(count0, ax=ax[0])
plot_histogram(count1, ax=ax[1])
fig.tight_layout()
###Output
_____no_output_____
###Markdown
We have obtained $100\%$ probability to find $\varphi=0.125$, that is, $1/8$, as expected.
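###Markdown
As a quick programmatic check (a minimal sketch added here, assuming `count0` and `m` from the cells above are still in scope), we can read the estimated phase off the most frequent bitstring directly:
###Code
# Sketch: interpret the most frequent bitstring c_{m-1}...c_0 as the binary
# fraction 0.varphi_1...varphi_m, exactly as done for the histogram keys above.
best_bits = max(count0, key=count0.get)
print("Estimated phase:", int(best_bits, 2) / 2**m)
###Output
Estimated phase: 0.125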
###Code
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
/opt/miniconda3/envs/qiskit/lib/python3.9/site-packages/qiskit/aqua/operators/operator_globals.py:48: DeprecationWarning: `from_label` is deprecated and will be removed no earlier than 3 months after the release date. Use Pauli(label) instead.
X = make_immutable(PrimitiveOp(Pauli.from_label('X')))
###Markdown
Iterative Quantum Phase Estimation AlgorithmThe goal of this tutorial is to understand how the Iterative Phase Estimation (IPE) algorithm works, why we would use it instead of the QPE (Quantum Phase Estimation) algorithm, and how to build it with Qiskit using a single circuit that exploits the reset gate and the `c_if` method, which allows us to apply gates conditioned on the values stored in a classical register as a result of previous measurements.**References**- [Section 2 of Lab 4: Iterative Phase Estimation (IPE) Algorithm](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html2-iterative-phase-estimation-ipe-algorithm) - [Ch.3.6 Quantum Phase Estimation](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html)
###Code
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute, assemble, Aer
from qiskit.tools.visualization import plot_histogram
from math import pi
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Conditioned gates: the c_if method Before starting the IPE algorithm, we will give a brief tutorial about the Qiskit conditional method, c_if, as it goes into building the IPE circuit.`c_if` is a function (actually a method of the gate class) to perform conditioned operations based on the value stored previously in a classical register. With this feature you can apply gates after a measurement in the same circuit conditioned by the measurement outcome.For example, the following code will execute the $X$ gate if the value of the classical register is $0$.
###Code
q = QuantumRegister(1,'q')
c = ClassicalRegister(1,'c')
qc = QuantumCircuit(q, c)
qc.h(0)
qc.measure(0,0)
qc.x(0).c_if(c, 0)
qc.draw(output='mpl')
###Output
_____no_output_____
###Markdown
We highlight that the method c_if expects as the first argument a whole classical register, not a single classical bit (or a list of classical bits), and as the second argument a value in decimal representation (a non-negative integer), not the value of a single bit, 0, or 1 (or a list/string of binary digits).Let's make another example. Consider that we want to perform a bit flip on the third qubit after the measurements in the following circuit, when the results of the measurement of $q_0$ and $q_1$ are both $1$.
###Code
q = QuantumRegister(3,'q')
c = ClassicalRegister(3,'c')
qc = QuantumCircuit(q, c)
qc.h(q[0])
qc.h(q[1])
qc.h(q[2])
qc.barrier()
qc.measure(q,c)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
We want to apply the $X$ gate only if both the results of the measurement of $q_0$ and $q_1$ are $1$. We can do this using the c_if method, conditioning the application of $X$ on the value passed as argument to c_if.We will have to encode the value to pass to the c_if method such that it checks the values 011 and 111 (in binary representation), since it does not matter what $q_2$ is measured as (the leftmost bit of the register).The two corresponding integer values in decimal representation are 3 and 7. We can check this using the bin() method in Python (the prefix `0b` indicates the binary format).
###Code
print(bin(3))
print(bin(7))
###Output
0b11
0b111
###Markdown
So we have to apply $X$ to $q_2$ using c_if two times, one for each value corresponding to 011 and 111.
###Code
q = QuantumRegister(3,'q')
c = ClassicalRegister(3,'c')
qc = QuantumCircuit(q, c)
qc.h(0)
qc.h(1)
qc.h(2)
qc.barrier()
qc.measure(q,c)
qc.x(2).c_if(c, 3) # for the 011 case
qc.x(2).c_if(c, 7) # for the 111 case
qc.draw(output='mpl')
###Output
_____no_output_____
###Markdown
IPEThe motivation for using the IPE algorithm is that QPE algorithm works fine for short depth circuits but when the circuit starts to grow, it doesn't work properly due to gate noise and decoherence times.The detailed explanation of how the algorithm works can be found in [Iterative Phase Estimation (IPE) Algorithm](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html2-iterative-phase-estimation-ipe-algorithm). To understand QPE in depth, you can see also [Ch.3.6 Quantum Phase Estimation](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html). IPE example with a 1-qubit gate for $U$We want to apply the IPE algorithm to estimate the phase for a 1-qubit operator $U$. For example, here we use the $S$-gate.Let's apply the IPE algorithm to estimate the phase for $S$-gate.Its matrix is $$ S = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{2}\\ \end{bmatrix}$$That is, the $S$-gate adds a phase $\pi/2$ to the state $|1\rangle$, leaving unchanged the phase of the state $|0\rangle$$$ S|1\rangle = e^\frac{i\pi}{2}|1\rangle $$In the following, we will use the notation and terms used in [Section 2 of lab 4](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html2-iterative-phase-estimation-ipe-algorithm).Let's consider to estimate the phase $\phi=\frac{\pi}{2}$ for the eigenstate $|1\rangle$, we should find $\varphi=\frac{1}{4}$ (where $\phi = 2 \pi \varphi$). Therefore to estimate the phase we need exactly 2 phase bits, i.e. $m=2$, since $1/2^2=1/4$. So $\varphi=0.\varphi_1\varphi_2$.Remember from the theory that for the IPE algorithm, $m$ is also the number of iterations, so we need only $2$ iterations or steps.First, we initialize the circuit. IPE works with only 1 auxiliary qubit, instead of $m$ counting qubits of the QPE algorithm. Therefore, we need 2 qubits, 1 auxiliary qubit and 1 for the eigenstate of $U$-gate, and a classical register of 2 bits, for the phase bits $\varphi_1$, $\varphi_2$.
###Code
nq = 2
m = 2
q = QuantumRegister(nq,'q')
c = ClassicalRegister(m,'c')
qc_S = QuantumCircuit(q,c)
###Output
_____no_output_____
###Markdown
First stepNow we build the quantum circuit for the first step, that is, the first iteration of the algorithm, to estimate the least significant phase bit $\varphi_m$, in this case $\varphi_2$. For the first step we have 3 sub-steps:- initialization- application of the Controlled-$U$ gates- measurement of the auxiliary qubit in the x-basis InitializationThe initialization consists of applying the Hadamard gate to the auxiliary qubit and preparing the eigenstate $|1\rangle$.
###Code
qc_S.h(0)
qc_S.x(1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gatesThen we have to apply $2^t$ times the Controlled-$U$ operators (see also in the docs [Two qubit gates](https://qiskit.org/documentation/tutorials/circuits/3_summary_of_quantum_operations.htmlTwo-qubit-gates)), that, in this example, is the Controlled-$S$ gate ($CS$ for short).To implement $CS$ in the circuit, since $S$ is a phase gate, we can use the controlled phase gate $\text{CP}(\theta)$, with $\theta=\pi/2$.
###Code
cu_circ = QuantumCircuit(2)
cu_circ.cp(pi/2,0,1)
cu_circ.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's apply $2^t$ times $\text{CP}(\pi/2)$. Since for the first step $t=m-1$, and $m=2$, we have $2^t=2$.
###Code
for _ in range(2**(m-1)):
qc_S.cp(pi/2,0,1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Measure in x-basisFinally, we perform the measurement of the auxiliary qubit in the x-basis. We define a function to perform the x-measurement and then apply it.
###Code
def x_measurement(qc, qubit, cbit):
"""Measure 'qubit' in the X-basis, and store the result in 'cbit'"""
qc.h(qubit)
qc.measure(qubit, cbit)
###Output
_____no_output_____
###Markdown
In this way we obtain the phase bit $\varphi_2$ and store it in the classical bit $c_0$.
###Code
x_measurement(qc_S, q[0], c[0])
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Subsequent steps (2nd step)Now we build the quantum circuit for the remaining steps, in this example only the second one.In these steps we have 4 sub-steps: the 3 sub-steps as in the first step and, in the middle, the additional step of the phase correction- initialization with reset- phase correction- application of the Controlled-$U$ gates- measurement of the auxiliary qubit in the x-basis Initialization with resetAs we want to perform an iterative algorithm in the same circuit, we need to reset the auxiliary qubit $q_0$ after the measurement gate and initialize it again as before to recycle the qubit.
###Code
qc_S.reset(0)
qc_S.h(0)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Phase correction (for step 2)As seen in the theory, in order to extract the phase bit $\varphi_{1}$, we perform a phase correction of $-\pi\varphi_2/2$.Of course, we need to apply the phase correction in the circuit only if the phase bit $\varphi_2=1$, i.e. we have to apply the phase correction of $-\pi/2$ only if the classical bit $c_0$ is 1.So, after the reset we apply the phase gate $P(\theta)$ with phase $\theta=-\pi/2$ conditioned by the classical bit $c_0$ ($=\varphi_2$) using the `c_if` method.So as we saw in the first part of this tutorial, we have to use the `c_if` method with a value of 1, as $1_{10} = 001_{2}$ (the subscripts $_{10}$ and $_2$ indicate the decimal and binary representations).
###Code
qc_S.p(-pi/2,0).c_if(c,1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gates and x-measurement (for step 2)We apply the $CU$ operations as we did in the first step. For the second step we have $t=m-2$, hence $2^t=1$. So we apply $\text{CP}(\pi/2)$ once. Then we perform the x-measurement of the qubit $q_0$, storing the result, the phase bit $\varphi_1$, in the bit $c_1$ of the classical register.
###Code
## 2^t c-U operations (with t=m-2)
for _ in range(2**(m-2)):
qc_S.cp(pi/2,0,1)
x_measurement(qc_S, q[0], c[1])
###Output
_____no_output_____
###Markdown
Et voilà, we have our final circuit
###Code
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's execute the circuit with the `qasm_simulator`, the noiseless simulator that runs locally.
###Code
sim = Aer.get_backend('qasm_simulator')
count0 = execute(qc_S, sim).result().get_counts()
key_new = [str(int(key,2)/2**m) for key in list(count0.keys())]  # convert each bitstring key to the phase it encodes (integer value / 2^m)
count1 = dict(zip(key_new, count0.values()))
fig, ax = plt.subplots(1,2)
plot_histogram(count0, ax=ax[0])
plot_histogram(count1, ax=ax[1])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
In the picture we have the same histograms, but on the left the x-axis shows the string of phase bits $\varphi_1$, $\varphi_2$, while on the right it shows the actual phase $\varphi$ in decimal representation.As expected, we have found $\varphi=\frac{1}{4}=0.25$ with $100\%$ probability. IPE example with a 2-qubit gateNow, we want to apply the IPE algorithm to estimate the phase for a 2-qubit gate $U$. For this example, let's consider the controlled version of the $T$ gate, i.e. the gate $U=\textrm{Controlled-}T$ (which from now on we will write more compactly as $CT$). Its matrix is$$ CT = \begin{bmatrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & e^\frac{i\pi}{4}\\ \end{bmatrix} $$That is, the $CT$ gate adds a phase $\pi/4$ to the state $|11\rangle$, leaving unchanged the phase of the other computational basis states $|00\rangle$, $|01\rangle$, $|10\rangle$.If we estimate the phase $\phi=\pi/4$ for the eigenstate $|11\rangle$, we should find $\varphi=1/8$, since $\phi = 2 \pi \varphi$. Therefore to estimate the phase we need exactly 3 classical bits, i.e. $m=3$, since $1/2^3=1/8$. So $\varphi=0.\varphi_1\varphi_2\varphi_3$.As in the example for the 1-qubit $U$ operator, we will go through the same steps, but this time we will have $3$ steps since $m=3$, and we will not repeat all the explanations; for details see the 1-qubit $U$ gate example above.First, we initialize the circuit with 3 qubits, 1 for the auxiliary qubit and 2 for the 2-qubit gate, and 3 classical bits to store the phase bits $\varphi_1$, $\varphi_2$, $\varphi_3$.
###Code
nq = 3 # number of qubits
m = 3 # number of classical bits
q = QuantumRegister(nq,'q')
c = ClassicalRegister(m,'c')
qc = QuantumCircuit(q,c)
###Output
_____no_output_____
###Markdown
First stepNow we build the quantum circuit for the first step, to estimate the least significant phase bit $\varphi_m=\varphi_3$. InitializationWe initialize the auxiliary qubit and the other qubits with the eigenstate $|11\rangle$.
###Code
qc.h(0)
qc.x([1,2])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gatesThen we have to apply multiple times the $CU$ operator, that, in this example, is the Controlled-$CT$ gate ($CCT$ for short).To implement $CCT$ in the circuit, since $T$ is a phase gate, we can use the multi-controlled phase gate $\text{MCP}(\theta)$, with $\theta=\pi/4$.
###Code
cu_circ = QuantumCircuit(nq)
cu_circ.mcp(pi/4,[0,1],2)
cu_circ.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's apply $2^t$ times $\text{MCP}(\pi/4)$. Since for the first step $t=m-1$ and $m=3$, we have $2^t=4$.
###Code
for _ in range(2**(m-1)):
qc.mcp(pi/4,[0,1],2)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Measure in x-basisFinally, we perform the measurement of the auxiliary qubit in x-basis.We can use the `x_measurement` function defined above in the example for 1-qubit gate. In this way we have obtained the phase bit $\varphi_3$ and stored it in the classical bit $c_0$.
###Code
x_measurement(qc, q[0], c[0])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Subsequent steps (2nd, 3rd)Now we build the quantum circuit for the other remaining steps, the second and the third ones.As said in the first example, in these steps we have the additional sub-step of the phase correction. Initialization with reset
###Code
qc.reset(0)
qc.h(0)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Phase correction (for step 2)In order to extract the phase bit $\varphi_{2}$, we perform a phase correction of $-\pi\varphi_3/2$.So, after the reset we apply the phase gate $P(\theta)$ with phase $\theta=-\pi/2$ conditioned by the classical bit $c_0$ ($=\varphi_3$).
###Code
qc.p(-pi/2,0).c_if(c,1)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gates and x-measurement (for step 2)We apply the $CU$ operations as we did in the first step. For the second step we have $t=m-2$, hence $2^t=2$. So we apply $\text{MCP}(\pi/4)$ $2$ times. And then we perform the x-measurement of the qubit $q_0$, storing the phase bit $\varphi_2$ in the bit $c_1$.
###Code
for _ in range(2**(m-2)):
qc.mcp(pi/4,[0,1],2)
x_measurement(qc, q[0], c[1])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
All substeps of the 3rd stepFor the 3rd and last step, we perform the reset and initialization of the auxiliary qubit as done in the second step.Then at the 3rd step we have to perform the phase correction of $-2\pi 0.0\varphi_{2}\varphi_{3}= -2\pi \left(\frac{\varphi_2}{4}+\frac{\varphi_3}{8}\right)=-\frac{\varphi_2\pi}{2}-\frac{ \varphi_3\pi}{4}$, thus we have to apply 2 conditioned phase corrections, one conditioned by $\varphi_3$ ($=c_0$) and the other by $\varphi_2$($=c_1$). To do this we have to apply the following:- gate $P(-\pi/4)$ conditioned by $c_0=1$, that is, by $c=001$ (c_if with value $1$)- gate $P(-\pi/2)$ conditioned by $c_1=1$, that is, the gate is applied when $c=010$ (c_if with values $2$)- gate $P(-3\pi/4)$ conditioned by $c_1=1$ and $c_0=1$ that is, the gate is applied when $c=011$ (c_if with values $3$)Next, the $CU$ operations: we apply $2^t$ times the $\text{MCP}(\pi/4)$ gate and since at the 3rd step $t=m-3=0$, we apply the gate only once.
###Code
# initialization of qubit q0
qc.reset(0)
qc.h(0)
# phase correction
qc.p(-pi/4,0).c_if(c,1)
qc.p(-pi/2,0).c_if(c,2)
qc.p(-3*pi/4,0).c_if(c,3)  # P(-3*pi/4) when both c_0 and c_1 are 1 (c = 011)
# c-U operations
for _ in range(2**(m-3)):
qc.mcp(pi/4,[0,1],2)
# X measurement
qc.h(0)
qc.measure(0,2)
qc.draw('mpl')
###Output
_____no_output_____
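###Markdown
The three conditional phase gates above follow a simple pattern. As a minimal sketch (an illustration added here, not part of the original tutorial), assuming $k$ phase bits have already been measured and are stored as the integer value $v$ of the classical register, the correction to apply on the auxiliary qubit is $-2\pi v/2^{k+1}$:
###Code
from math import pi, isclose
# Sketch (assumption, for illustration only): general IPE phase-correction angle
# after k phase bits have been measured, stored as the integer value v of the
# classical register.
def correction_angle(v, k):
    return -2 * pi * v / 2**(k + 1)
# For k = 2 this reproduces the three conditional corrections used above.
assert isclose(correction_angle(1, 2), -pi/4)
assert isclose(correction_angle(2, 2), -pi/2)
assert isclose(correction_angle(3, 2), -3*pi/4)
###Output
_____no_output_____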
###Markdown
Now, we execute the circuit with the simulator without noise.
###Code
count0 = execute(qc, sim).result().get_counts()
key_new = [str(int(key,2)/2**m) for key in list(count0.keys())]  # convert each bitstring key to the phase it encodes (integer value / 2^m)
count1 = dict(zip(key_new, count0.values()))
fig, ax = plt.subplots(1,2)
plot_histogram(count0, ax=ax[0])
plot_histogram(count1, ax=ax[1])
fig.tight_layout()
###Output
_____no_output_____
###Markdown
We have obtained $100\%$ probability to find $\varphi=0.125$, that is, $1/8$, as expected.
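###Markdown
As a quick programmatic check (a minimal sketch added here, assuming `count0` and `m` from the cells above are still in scope), we can read the estimated phase off the most frequent bitstring directly:
###Code
# Sketch: interpret the most frequent bitstring c_{m-1}...c_0 as the binary
# fraction 0.varphi_1...varphi_m, exactly as done for the histogram keys above.
best_bits = max(count0, key=count0.get)
print("Estimated phase:", int(best_bits, 2) / 2**m)
###Output
Estimated phase: 0.125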
###Code
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
/opt/miniconda3/envs/qiskit/lib/python3.9/site-packages/qiskit/aqua/operators/operator_globals.py:48: DeprecationWarning: `from_label` is deprecated and will be removed no earlier than 3 months after the release date. Use Pauli(label) instead.
X = make_immutable(PrimitiveOp(Pauli.from_label('X')))
###Markdown
Iterative Quantum Phase Estimation AlgorithmThe goal of this tutorial is to understand how the Iterative Phase Estimation (IPE) algorithm works, why we would use it instead of the QPE (Quantum Phase Estimation) algorithm, and how to build it with Qiskit using a single circuit that exploits the reset gate and the `c_if` method, which allows us to apply gates conditioned on the values stored in a classical register as a result of previous measurements.**References**- [Section 2 of Lab 4: Iterative Phase Estimation (IPE) Algorithm](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html2-iterative-phase-estimation-ipe-algorithm) - [Ch.3.6 Quantum Phase Estimation](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html)
###Code
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute, assemble, Aer
from qiskit.tools.visualization import plot_histogram
from math import pi
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Conditioned gates: the c_if method Before starting the IPE algorithm, we will give a brief tutorial about the Qiskit conditional method, c_if, as it goes into building the IPE circuit.`c_if` is a function (actually a method of the gate class) to perform conditioned operations based on the value stored previously in a classical register. With this feature you can apply gates after a measurement in the same circuit conditioned by the measurement outcome.For example, the following code will execute the $X$ gate if the value of the classical register is $0$.
###Code
q = QuantumRegister(1,'q')
c = ClassicalRegister(1,'c')
qc = QuantumCircuit(q, c)
qc.h(0)
qc.measure(0,0)
qc.x(0).c_if(c, 0)
qc.draw(output='mpl')
###Output
_____no_output_____
###Markdown
We highlight that the method c_if expects as the first argument a whole classical register, not a single classical bit (or a list of classical bits), and as the second argument a value in decimal representation (a non-negative integer), not the value of a single bit, 0, or 1 (or a list/string of binary digits).Let's make another example. Consider that we want to perform a bit flip on the third qubit after the measurements in the following circuit, when the results of the measurement of $q_0$ and $q_1$ are both $1$.
###Code
q = QuantumRegister(3,'q')
c = ClassicalRegister(3,'c')
qc = QuantumCircuit(q, c)
qc.h(q[0])
qc.h(q[1])
qc.h(q[2])
qc.barrier()
qc.measure(q,c)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
We want to apply the $X$ gate only if both the results of the measurement of $q_0$ and $q_1$ are $1$. We can do this using the c_if method, conditioning the application of $X$ on the value passed as argument to c_if.We will have to encode the value to pass to the c_if method such that it checks the values 011 and 111 (in binary representation), since it does not matter what $q_2$ is measured as (the leftmost bit of the register).The two corresponding integer values in decimal representation are 3 and 7. We can check this using the bin() method in Python (the prefix `0b` indicates the binary format).
###Code
print(bin(3))
print(bin(7))
###Output
0b11
0b111
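###Markdown
As a quick cross-check (added for illustration, not part of the original tutorial), we can enumerate all 3-bit register values and keep only those whose two least significant bits ($c_0$ and $c_1$) are set; this reproduces the decimal values 3 and 7 used with c_if below.
###Code
# Illustrative check: register values with c0 = 1 and c1 = 1, c2 arbitrary
matching = [v for v in range(2**3) if (v & 0b011) == 0b011]
assert matching == [3, 7]
###Output
_____no_output_____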
###Markdown
So we have to apply $X$ to $q_2$ using c_if two times, once for each value corresponding to 011 and 111.
###Code
q = QuantumRegister(3,'q')
c = ClassicalRegister(3,'c')
qc = QuantumCircuit(q, c)
qc.h(0)
qc.h(1)
qc.h(2)
qc.barrier()
qc.measure(q,c)
qc.x(2).c_if(c, 3) # for the 011 case
qc.x(2).c_if(c, 7) # for the 111 case
qc.draw(output='mpl')
###Output
_____no_output_____
###Markdown
IPEThe motivation for using the IPE algorithm is that the QPE algorithm works fine for short-depth circuits, but when the circuit starts to grow it doesn't work properly due to gate noise and decoherence times.The detailed explanation of how the algorithm works can be found in [Iterative Phase Estimation (IPE) Algorithm](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html#2-iterative-phase-estimation-ipe-algorithm). To understand QPE in depth, you can also see [Ch.3.6 Quantum Phase Estimation](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html). IPE example with a 1-qubit gate for $U$We want to apply the IPE algorithm to estimate the phase for a 1-qubit operator $U$. For example, here we use the $S$-gate.Let's apply the IPE algorithm to estimate the phase of the $S$-gate.Its matrix is $$ S = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{2}\\ \end{bmatrix}$$That is, the $S$-gate adds a phase $\pi/2$ to the state $|1\rangle$, leaving the phase of the state $|0\rangle$ unchanged:$$ S|1\rangle = e^\frac{i\pi}{2}|1\rangle $$In the following, we will use the notation and terms used in [Section 2 of lab 4](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html#2-iterative-phase-estimation-ipe-algorithm).Suppose we want to estimate the phase $\phi=\frac{\pi}{2}$ for the eigenstate $|1\rangle$; we should find $\varphi=\frac{1}{4}$ (where $\phi = 2 \pi \varphi$). Therefore to estimate the phase we need exactly 2 phase bits, i.e. $m=2$, since $1/2^2=1/4$. So $\varphi=0.\varphi_1\varphi_2$.Remember from the theory that for the IPE algorithm, $m$ is also the number of iterations, so we need only $2$ iterations or steps.First, we initialize the circuit. IPE works with only 1 auxiliary qubit, instead of the $m$ counting qubits of the QPE algorithm. Therefore, we need 2 qubits, 1 auxiliary qubit and 1 for the eigenstate of the $U$-gate, and a classical register of 2 bits, for the phase bits $\varphi_1$, $\varphi_2$.
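###Markdown
Before building the circuit, here is a small numerical sanity check (added for this write-up, not part of the original tutorial): we read the eigenphase of the $S$-gate on $|1\rangle$ directly from its matrix and confirm that $\varphi = \phi/(2\pi) = 0.25$, i.e. $0.01$ in binary, so $m=2$ phase bits are enough.
###Code
# Illustrative sanity check: eigenphase of S on |1> as a binary fraction
import numpy as np
S = np.diag([1, np.exp(1j * np.pi / 2)])  # matrix of the S-gate
phi = np.angle(S[1, 1])                   # phase added to |1>, equals pi/2
varphi = phi / (2 * np.pi)                # 0.25 = 0.01 in binary -> m = 2 bits
assert np.isclose(varphi, 0.25)
###Output
_____no_output_____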
###Code
nq = 2
m = 2
q = QuantumRegister(nq,'q')
c = ClassicalRegister(m,'c')
qc_S = QuantumCircuit(q,c)
###Output
_____no_output_____
###Markdown
First stepNow we build the quantum circuit for the first step, that is, the first iteration of the algorithm, to estimate the least significant phase bit $\varphi_m$, in this case $\varphi_2$. For the first step we have 3 sub-steps:- initialization- application of the Controlled-$U$ gates- measurement of the auxiliary qubit in the x-basis InitializationThe initialization consists of applying the Hadamard gate to the auxiliary qubit and preparing the eigenstate $|1\rangle$.
###Code
qc_S.h(0)
qc_S.x(1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gatesThen we have to apply the Controlled-$U$ operator $2^t$ times (see also [Two qubit gates](https://qiskit.org/documentation/tutorials/circuits/3_summary_of_quantum_operations.html#Two-qubit-gates) in the docs), which, in this example, is the Controlled-$S$ gate ($CS$ for short).To implement $CS$ in the circuit, since $S$ is a phase gate, we can use the controlled phase gate $\text{CP}(\theta)$, with $\theta=\pi/2$.
###Code
cu_circ = QuantumCircuit(2)
cu_circ.cp(pi/2,0,1)
cu_circ.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's apply $2^t$ times $\text{CP}(\pi/2)$. Since for the first step $t=m-1$, and $m=2$, we have $2^t=2$.
###Code
for _ in range(2**(m-1)):
qc_S.cp(pi/2,0,1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Measure in x-basisFinally, we perform the measurement of the auxiliary qubit in the x-basis. So we will define a function to perform the x-measurement and then apply it.
###Code
def x_measurement(qc, qubit, cbit):
"""Measure 'qubit' in the X-basis, and store the result in 'cbit'"""
qc.h(qubit)
qc.measure(qubit, cbit)
###Output
_____no_output_____
###Markdown
In this way we obtain the phase bit $\varphi_2$ and store it in the classical bit $c_0$.
###Code
x_measurement(qc_S, q[0], c[0])
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Subsequent steps (2nd step)Now we build the quantum circuit for the remaining steps, in this example, only the second one.In these steps we have 4 sub-steps: the 3 sub-steps as in the first step and, in the middle, the additional step of the phase correction- initialization with reset- phase correction- application of the Controlled-$U$ gates- measurement of the auxiliary qubit in the x-basis Initialization with resetAs we want to perform an iterative algorithm in the same circuit, we need to reset the auxiliary qubit $q_0$ after the measurement gate and initialize it again as before to recycle the qubit.
###Code
qc_S.reset(0)
qc_S.h(0)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Phase correction (for step 2)As seen in the theory, in order to extract the phase bit $\varphi_{1}$, we perform a phase correction of $-\pi\varphi_2/2$.Of course, we need to apply the phase correction in the circuit only if the phase bit $\varphi_2=1$, i.e. we have to apply the phase correction of $-\pi/2$ only if the classical bit $c_0$ is 1.So, after the reset we apply the phase gate $P(\theta)$ with phase $\theta=-\pi/2$ conditioned by the classical bit $c_0$ ($=\varphi_2$) using the `c_if` method.So as we saw in the first part of this tutorial, we have to use the `c_if` method with a value of 1, as $1_{10} = 001_{2}$ (the subscripts $_{10}$ and $_2$ indicate the decimal and binary representations).
###Code
qc_S.p(-pi/2,0).c_if(c,1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gates and x-measurement (for step 2)We apply the $CU$ operations as we did in the first step. For the second step we have $t=m-2$, hence $2^t=1$. So we apply $\text{CP}(\pi/2)$ once. And then we perform the x-measurement of the qubit $q_0$, storing the result, the phase bit $\varphi_1$, in the bit $c_1$ of the classical register.
###Code
## 2^t c-U operations (with t=m-2)
for _ in range(2**(m-2)):
qc_S.cp(pi/2,0,1)
x_measurement(qc_S, q[0], c[1])
###Output
_____no_output_____
###Markdown
Et voilà, we have our final circuit
###Code
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's execute the circuit with the `qasm_simulator`, the simulator without noise that runs locally.
###Code
sim = Aer.get_backend('qasm_simulator')
count0 = execute(qc_S, sim).result().get_counts()
key_new = [str(int(key,2)/2**m) for key in list(count0.keys())]
count1 = dict(zip(key_new, count0.values()))
fig, ax = plt.subplots(1,2)
plot_histogram(count0, ax=ax[0])
plot_histogram(count1, ax=ax[1])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
In the picture we have the same histograms, but on the left the x-axis shows the string of phase bits $\varphi_1$, $\varphi_2$, while on the right it shows the actual phase $\varphi$ in decimal representation.As we expected, we have found $\varphi=\frac{1}{4}=0.25$ with a $100\%$ probability. IPE example with a 2-qubit gateNow, we want to apply the IPE algorithm to estimate the phase for a 2-qubit gate $U$. For this example, let's consider the controlled version of the $T$ gate, i.e. the gate $U=\textrm{Controlled-}T$ (which from now on we will write more compactly as $CT$). Its matrix is$$ CT = \begin{bmatrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & e^\frac{i\pi}{4}\\ \end{bmatrix} $$That is, the $CT$ gate adds a phase $\pi/4$ to the state $|11\rangle$, leaving unchanged the phase of the other computational basis states $|00\rangle$, $|01\rangle$, $|10\rangle$.Suppose we want to estimate the phase $\phi=\pi/4$ for the eigenstate $|11\rangle$; we should find $\varphi=1/8$, since $\phi = 2 \pi \varphi$. Therefore to estimate the phase we need exactly 3 classical bits, i.e. $m=3$, since $1/2^3=1/8$. So $\varphi=0.\varphi_1\varphi_2\varphi_3$.As done in the example for the 1-qubit $U$ operator, we will go through the same steps, but this time we will have $3$ steps since $m=3$, and we will not repeat all the explanations. For details, see the above example for the 1-qubit $U$ gate.First, we initialize the circuit with 3 qubits, 1 for the auxiliary qubit and 2 for the 2-qubit gate, and 3 classical bits to store the phase bits $\varphi_1$, $\varphi_2$, $\varphi_3$.
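###Markdown
As an aside (added here, not part of the original tutorial), the same sanity check as for the 1-qubit case shows that the eigenphase of $CT$ on $|11\rangle$ is $\varphi = (\pi/4)/(2\pi) = 0.125$, i.e. $0.001$ in binary, which is why $m=3$ bits are needed.
###Code
# Illustrative sanity check: eigenphase of Controlled-T on |11>
import numpy as np
CT = np.diag([1, 1, 1, np.exp(1j * np.pi / 4)])  # matrix of the CT gate
varphi = np.angle(CT[3, 3]) / (2 * np.pi)        # 0.125 = 0.001 in binary -> m = 3 bits
assert np.isclose(varphi, 1 / 8)
###Output
_____no_output_____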
###Code
nq = 3 # number of qubits
m = 3 # number of classical bits
q = QuantumRegister(nq,'q')
c = ClassicalRegister(m,'c')
qc = QuantumCircuit(q,c)
###Output
_____no_output_____
###Markdown
First stepNow we build the quantum circuit for the first step, to estimate the least significant phase bit $\varphi_m=\varphi_3$. InitializationWe initialize the auxiliary qubit and the other qubits with the eigenstate $|11\rangle$.
###Code
qc.h(0)
qc.x([1,2])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gatesThen we have to apply multiple times the $CU$ operator, that, in this example, is the Controlled-$CT$ gate ($CCT$ for short).To implement $CCT$ in the circuit, since $T$ is a phase gate, we can use the multi-controlled phase gate $\text{MCP}(\theta)$, with $\theta=\pi/4$.
###Code
cu_circ = QuantumCircuit(nq)
cu_circ.mcp(pi/4,[0,1],2)
cu_circ.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's apply $2^t$ times $\text{MCP}(\pi/4)$. Since for the first step $t=m-1$ and $m=3$, we have $2^t=4$.
###Code
for _ in range(2**(m-1)):
qc.mcp(pi/4,[0,1],2)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Measure in x-basisFinally, we perform the measurement of the auxiliary qubit in x-basis.We can use the `x_measurement` function defined above in the example for 1-qubit gate. In this way we have obtained the phase bit $\varphi_3$ and stored it in the classical bit $c_0$.
###Code
x_measurement(qc, q[0], c[0])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Subsequent steps (2nd, 3rd)Now we build the quantum circuit for the other remaining steps, the second and the third ones.As said in the first example, in these steps we have the additional sub-step of the phase correction. Initialization with reset
###Code
qc.reset(0)
qc.h(0)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Phase correction (for step 2)In order to extract the phase bit $\varphi_{2}$, we perform a phase correction of $-\pi\varphi_3/2$.So, after the reset we apply the phase gate $P(\theta)$ with phase $\theta=-\pi/2$ conditioned by the classical bit $c_0$ ($=\varphi_3$).
###Code
qc.p(-pi/2,0).c_if(c,1)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gates and x-measurement (for step 2)We apply the $CU$ operations as we did in the first step. For the second step we have $t=m-2$, hence $2^t=2$. So we apply $\text{MCP}(\pi/4)$ $2$ times. And then we perform the x-measurement of the qubit $q_0$, storing the phase bit $\varphi_2$ in the bit $c_1$.
###Code
for _ in range(2**(m-2)):
qc.mcp(pi/4,[0,1],2)
x_measurement(qc, q[0], c[1])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
All substeps of the 3rd stepFor the 3rd and last step, we perform the reset and initialization of the auxiliary qubit as done in the second step.Then at the 3rd step we have to perform the phase correction of $-2\pi 0.0\varphi_{2}\varphi_{3}= -2\pi \left(\frac{\varphi_2}{4}+\frac{\varphi_3}{8}\right)=-\frac{\varphi_2\pi}{2}-\frac{ \varphi_3\pi}{4}$, thus we have to apply 2 conditioned phase corrections, one conditioned by $\varphi_3$ ($=c_0$) and the other by $\varphi_2$ ($=c_1$). To do this we have to apply the following:- gate $P(-\pi/4)$ conditioned by $c_0=1$, that is, applied when $c=001$ (c_if with value $1$)- gate $P(-\pi/2)$ conditioned by $c_1=1$, that is, applied when $c=010$ (c_if with value $2$)- gate $P(-3\pi/4)$ conditioned by $c_1=1$ and $c_0=1$, that is, applied when $c=011$ (c_if with value $3$)Next, the $CU$ operations: we apply the $\text{MCP}(\pi/4)$ gate $2^t$ times, and since at the 3rd step $t=m-3=0$, we apply the gate only once.
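###Markdown
The following small check (added for illustration, not part of the original tutorial) evaluates the correction formula $-2\pi(\varphi_2/4 + \varphi_3/8)$ for the three possible non-zero register values and confirms the three angles $-\pi/4$, $-\pi/2$ and $-3\pi/4$ used in the conditioned phase gates below.
###Code
# Illustrative check of the step-3 correction angles (c0 stores varphi_3, c1 stores varphi_2)
from math import pi, isclose
def correction(c):
    phi3 = c & 1         # classical bit c0
    phi2 = (c >> 1) & 1  # classical bit c1
    return -2 * pi * (phi2 / 4 + phi3 / 8)
expected = {1: -pi / 4, 2: -pi / 2, 3: -3 * pi / 4}
assert all(isclose(correction(c), v) for c, v in expected.items())
###Output
_____no_output_____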
###Code
# initialization of qubit q0
qc.reset(0)
qc.h(0)
# phase correction
qc.p(-pi/4,0).c_if(c,1)
qc.p(-pi/2,0).c_if(c,2)
qc.p(-3*pi/4,0).c_if(c,3) # phase correction of -3*pi/4 (= -pi/2 - pi/4) when c = 011, i.e. both c0 and c1 are 1
# c-U operations
for _ in range(2**(m-3)):
qc.mcp(pi/4,[0,1],2)
# X measurement
qc.h(0)
qc.measure(0,2)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Now, we execute the circuit with the simulator without noise.
###Code
count0 = execute(qc, sim).result().get_counts()
key_new = [str(int(key,2)/2**m) for key in list(count0.keys())]
count1 = dict(zip(key_new, count0.values()))
fig, ax = plt.subplots(1,2)
plot_histogram(count0, ax=ax[0])
plot_histogram(count1, ax=ax[1])
fig.tight_layout()
###Output
_____no_output_____
###Markdown
We have obtained $100\%$ probability to find $\varphi=0.125$, that is, $1/8$, as expected.
###Code
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
/opt/miniconda3/envs/qiskit/lib/python3.9/site-packages/qiskit/aqua/operators/operator_globals.py:48: DeprecationWarning: `from_label` is deprecated and will be removed no earlier than 3 months after the release date. Use Pauli(label) instead.
X = make_immutable(PrimitiveOp(Pauli.from_label('X')))
###Markdown
Iterative Quantum Phase Estimation AlgorithmThe goal of this tutorial is to understand how the Iterative Phase Estimation (IPE) algorithm works, why we would use the IPE algorithm instead of the QPE (Quantum Phase Estimation) algorithm, and how to build it with Qiskit using a single circuit that exploits the reset gate and the `c_if` method, which allows gates to be applied conditioned on the values stored in a classical register as a result of previous measurements.**References**- [Section 2 of Lab 4: Iterative Phase Estimation (IPE) Algorithm](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html#2-iterative-phase-estimation-ipe-algorithm) - [Ch.3.6 Quantum Phase Estimation](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html)
###Code
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute, assemble, Aer
from qiskit.tools.visualization import plot_histogram
from math import pi
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Conditioned gates: the c_if method Before starting the IPE algorithm, we will give a brief tutorial about the Qiskit conditional method, c_if, as it goes into building the IPE circuit.`c_if` is a function (actually a method of the gate class) to perform conditioned operations based on the value stored previously in a classical register. With this feature you can apply gates after a measurement in the same circuit conditioned by the measurement outcome.For example, the following code will execute the $X$ gate if the value of the classical register is $0$.
###Code
q = QuantumRegister(1,'q')
c = ClassicalRegister(1,'c')
qc = QuantumCircuit(q, c)
qc.h(0)
qc.measure(0,0)
qc.x(0).c_if(c, 0)
qc.draw(output='mpl')
###Output
_____no_output_____
###Markdown
We highlight that the method c_if expects as the first argument a whole classical register, not a single classical bit (or a list of classical bits), and as the second argument a value in decimal representation (a non-negative integer), not the value of a single bit, 0, or 1 (or a list/string of binary digits).Let's make another example. Consider that we want to perform a bit flip on the third qubit after the measurements in the following circuit, when the results of the measurement of $q_0$ and $q_1$ are both $1$.
###Code
q = QuantumRegister(3,'q')
c = ClassicalRegister(3,'c')
qc = QuantumCircuit(q, c)
qc.h(q[0])
qc.h(q[1])
qc.h(q[2])
qc.barrier()
qc.measure(q,c)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
We want to apply the $X$ gate only if both the results of the measurement of $q_0$ and $q_1$ are $1$. We can do this using the c_if method, conditioning the application of $X$ on the value passed as argument to c_if.We have to encode the value to pass to the c_if method such that it checks for the values 011 and 111 (in binary representation), since it does not matter what is in the leftmost position (the bit coming from $q_2$).We need the corresponding 2 integer values in decimal representation: we can check them using the bin() method in python (the prefix `0b` indicates the binary format).
###Code
print(bin(3))
print(bin(7))
###Output
0b11
0b111
###Markdown
So we have to apply $X$ to $q_2$ using c_if two times, once for each value corresponding to 011 and 111.
###Code
q = QuantumRegister(3,'q')
c = ClassicalRegister(3,'c')
qc = QuantumCircuit(q, c)
qc.h(0)
qc.h(1)
qc.h(2)
qc.barrier()
qc.measure(q,c)
qc.x(2).c_if(c, 3) # for the 011 case
qc.x(2).c_if(c, 7) # for the 111 case
qc.draw(output='mpl')
###Output
_____no_output_____
###Markdown
IPEThe motivation for using the IPE algorithm is that the QPE algorithm works fine for short-depth circuits, but when the circuit starts to grow it doesn't work properly due to gate noise and decoherence times.The detailed explanation of how the algorithm works can be found in [Iterative Phase Estimation (IPE) Algorithm](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html#2-iterative-phase-estimation-ipe-algorithm). To understand QPE in depth, you can also see [Ch.3.6 Quantum Phase Estimation](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html). IPE example with a 1-qubit gate for $U$We want to apply the IPE algorithm to estimate the phase for a 1-qubit operator $U$. For example, here we use the $S$-gate.Let's apply the IPE algorithm to estimate the phase of the $S$-gate.Its matrix is $$ S = \begin{bmatrix}1 & 0\\0 & e^\frac{i\pi}{2}\\ \end{bmatrix}$$That is, the $S$-gate adds a phase $\pi/2$ to the state $|1\rangle$, leaving the phase of the state $|0\rangle$ unchanged:$$ S|1\rangle = e^\frac{i\pi}{2}|1\rangle $$In the following, we will use the notation and terms used in [Section 2 of lab 4](https://qiskit.org/textbook/ch-labs/Lab04_IterativePhaseEstimation.html#2-iterative-phase-estimation-ipe-algorithm).Suppose we want to estimate the phase $\phi=\frac{\pi}{2}$ for the eigenstate $|1\rangle$; we should find $\varphi=\frac{1}{4}$ (where $\phi = 2 \pi \varphi$). Therefore to estimate the phase we need exactly 2 phase bits, i.e. $m=2$, since $1/2^2=1/4$. So $\varphi=0.\varphi_1\varphi_2$.Remember from the theory that for the IPE algorithm, $m$ is also the number of iterations, so we need only $2$ iterations or steps.First, we initialize the circuit. IPE works with only 1 auxiliary qubit, instead of the $m$ counting qubits of the QPE algorithm. Therefore, we need 2 qubits, 1 auxiliary qubit and 1 for the eigenstate of the $U$-gate, and a classical register of 2 bits, for the phase bits $\varphi_1$, $\varphi_2$.
###Code
nq = 2
m = 2
q = QuantumRegister(nq,'q')
c = ClassicalRegister(m,'c')
qc_S = QuantumCircuit(q,c)
###Output
_____no_output_____
###Markdown
First stepNow we build the quantum circuit for the first step, that is, the first iteration of the algorithm, to estimate the least significant phase bit $\varphi_m$, in this case $\varphi_2$. For the first step we have 3 sub-steps:- initialization- application of the Controlled-$U$ gates- measurement of the auxiliary qubit in the x-basis InitializationThe initialization consists of applying the Hadamard gate to the auxiliary qubit and preparing the eigenstate $|1\rangle$.
###Code
qc_S.h(0)
qc_S.x(1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gatesThen we have to apply the Controlled-$U$ operator $2^t$ times (see also [Two qubit gates](https://qiskit.org/documentation/tutorials/circuits/3_summary_of_quantum_operations.html#Two-qubit-gates) in the docs), which, in this example, is the Controlled-$S$ gate ($CS$ for short).To implement $CS$ in the circuit, since $S$ is a phase gate, we can use the controlled phase gate $\text{CP}(\theta)$, with $\theta=\pi/2$.
###Code
cu_circ = QuantumCircuit(2)
cu_circ.cp(pi/2,0,1)
cu_circ.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's apply $2^t$ times $\text{CP}(\pi/2)$. Since for the first step $t=m-1$, and $m=2$, we have $2^t=2$.
###Code
for _ in range(2**(m-1)):
qc_S.cp(pi/2,0,1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Measure in x-basisFinally, we perform the measurement of the auxiliary qubit in the x-basis. So we will define a function to perform the x-measurement and then apply it.
###Code
def x_measurement(qc, qubit, cbit):
"""Measure 'qubit' in the X-basis, and store the result in 'cbit'"""
qc.h(qubit)
qc.measure(qubit, cbit)
###Output
_____no_output_____
###Markdown
In this way we obtain the phase bit $\varphi_2$ and store it in the classical bit $c_0$.
###Code
x_measurement(qc_S, q[0], c[0])
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Subsequent steps (2nd step)Now we build the quantum circuit for the remaining steps, in this example, only the second one.In these steps we have 4 sub-steps: the 3 sub-steps as in the first step and, in the middle, the additional step of the phase correction- initialization with reset- phase correction- application of the Controlled-$U$ gates- measurement of the auxiliary qubit in the x-basis Initialization with resetAs we want to perform an iterative algorithm in the same circuit, we need to reset the auxiliary qubit $q_0$ after the measurement gate and initialize it again as before to recycle the qubit.
###Code
qc_S.reset(0)
qc_S.h(0)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Phase correction (for step 2)As seen in the theory, in order to extract the phase bit $\varphi_{1}$, we perform a phase correction of $-\pi\varphi_2/2$.Of course, we need to apply the phase correction in the circuit only if the phase bit $\varphi_2=1$, i.e. we have to apply the phase correction of $-\pi/2$ only if the classical bit $c_0$ is 1.So, after the reset we apply the phase gate $P(\theta)$ with phase $\theta=-\pi/2$ conditioned by the classical bit $c_0$ ($=\varphi_2$) using the `c_if` method.So as we saw in the first part of this tutorial, we have to use the `c_if` method with a value of 1, as $1_{10} = 001_{2}$ (the subscripts $_{10}$ and $_2$ indicate the decimal and binary representations).
###Code
qc_S.p(-pi/2,0).c_if(c,1)
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gates and x-measurement (for step 2)We apply the $CU$ operations as we did in the first step. For the second step we have $t=m-2$, hence $2^t=1$. So we apply $\text{CP}(\pi/2)$ once. And then we perform the x-measurement of the qubit $q_0$, storing the result, the phase bit $\varphi_1$, in the bit $c_1$ of the classical register.
###Code
## 2^t c-U operations (with t=m-2)
for _ in range(2**(m-2)):
qc_S.cp(pi/2,0,1)
x_measurement(qc_S, q[0], c[1])
###Output
_____no_output_____
###Markdown
Et voilà, we have our final circuit
###Code
qc_S.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's execute the circuit with the `qasm_simulator`, the simulator without noise that runs locally.
###Code
sim = Aer.get_backend('qasm_simulator')
count0 = execute(qc_S, sim).result().get_counts()
key_new = [str(int(key,2)/2**m) for key in list(count0.keys())]
count1 = dict(zip(key_new, count0.values()))
fig, ax = plt.subplots(1,2)
plot_histogram(count0, ax=ax[0])
plot_histogram(count1, ax=ax[1])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
In the picture we have the same histograms, but on the left the x-axis shows the string of phase bits $\varphi_1$, $\varphi_2$, while on the right it shows the actual phase $\varphi$ in decimal representation.As we expected, we have found $\varphi=\frac{1}{4}=0.25$ with a $100\%$ probability. IPE example with a 2-qubit gateNow, we want to apply the IPE algorithm to estimate the phase for a 2-qubit gate $U$. For this example, let's consider the controlled version of the $T$ gate, i.e. the gate $U=\textrm{Controlled-}T$ (which from now on we will write more compactly as $CT$). Its matrix is$$ CT = \begin{bmatrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & e^\frac{i\pi}{4}\\ \end{bmatrix} $$That is, the $CT$ gate adds a phase $\pi/4$ to the state $|11\rangle$, leaving unchanged the phase of the other computational basis states $|00\rangle$, $|01\rangle$, $|10\rangle$.Suppose we want to estimate the phase $\phi=\pi/4$ for the eigenstate $|11\rangle$; we should find $\varphi=1/8$, since $\phi = 2 \pi \varphi$. Therefore to estimate the phase we need exactly 3 classical bits, i.e. $m=3$, since $1/2^3=1/8$. So $\varphi=0.\varphi_1\varphi_2\varphi_3$.As done in the example for the 1-qubit $U$ operator, we will go through the same steps, but this time we will have $3$ steps since $m=3$, and we will not repeat all the explanations. For details, see the above example for the 1-qubit $U$ gate.First, we initialize the circuit with 3 qubits, 1 for the auxiliary qubit and 2 for the 2-qubit gate, and 3 classical bits to store the phase bits $\varphi_1$, $\varphi_2$, $\varphi_3$.
###Code
nq = 3 # number of qubits
m = 3 # number of classical bits
q = QuantumRegister(nq,'q')
c = ClassicalRegister(m,'c')
qc = QuantumCircuit(q,c)
###Output
_____no_output_____
###Markdown
First stepNow we build the quantum circuit for the first step, to estimate the least significant phase bit $\varphi_m=\varphi_3$. InitializationWe initialize the auxiliary qubit and the other qubits with the eigenstate $|11\rangle$.
###Code
qc.h(0)
qc.x([1,2])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gatesThen we have to apply multiple times the $CU$ operator, that, in this example, is the Controlled-$CT$ gate ($CCT$ for short).To implement $CCT$ in the circuit, since $T$ is a phase gate, we can use the multi-controlled phase gate $\text{MCP}(\theta)$, with $\theta=\pi/4$.
###Code
cu_circ = QuantumCircuit(nq)
cu_circ.mcp(pi/4,[0,1],2)
cu_circ.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's apply $2^t$ times $\text{MCP}(\pi/4)$. Since for the first step $t=m-1$ and $m=3$, we have $2^t=4$.
###Code
for _ in range(2**(m-1)):
qc.mcp(pi/4,[0,1],2)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Measure in x-basisFinally, we perform the measurement of the auxiliary qubit in x-basis.We can use the `x_measurement` function defined above in the example for 1-qubit gate. In this way we have obtained the phase bit $\varphi_3$ and stored it in the classical bit $c_0$.
###Code
x_measurement(qc, q[0], c[0])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Subsequent steps (2nd, 3rd)Now we build the quantum circuit for the other remaining steps, the second and the third ones.As said in the first example, in these steps we have the additional sub-step of the phase correction. Initialization with reset
###Code
qc.reset(0)
qc.h(0)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Phase correction (for step 2)In order to extract the phase bit $\varphi_{2}$, we perform a phase correction of $-\pi\varphi_3/2$.So, after the reset we apply the phase gate $P(\theta)$ with phase $\theta=-\pi/2$ conditioned by the classical bit $c_0$ ($=\varphi_3$).
###Code
qc.p(-pi/2,0).c_if(c,1)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Application of the Controlled-$U$ gates and x-measurement (for step 2)We apply the $CU$ operations as we did in the first step. For the second step we have $t=m-2$, hence $2^t=2$. So we apply $\text{MCP}(\pi/4)$ $2$ times. And then we perform the x-measurement of the qubit $q_0$, storing the phase bit $\varphi_2$ in the bit $c_1$.
###Code
for _ in range(2**(m-2)):
qc.mcp(pi/4,[0,1],2)
x_measurement(qc, q[0], c[1])
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
All substeps of the 3rd stepFor the 3rd and last step, we perform the reset and initialization of the auxiliary qubit as done in the second step.Then at the 3rd step we have to perform the phase correction of $-2\pi 0.0\varphi_{2}\varphi_{3}= -2\pi \left(\frac{\varphi_2}{4}+\frac{\varphi_3}{8}\right)=-\frac{\varphi_2\pi}{2}-\frac{ \varphi_3\pi}{4}$, thus we have to apply 2 conditioned phase corrections, one conditioned by $\varphi_3$ ($=c_0$) and the other by $\varphi_2$ ($=c_1$). To do this we have to apply the following:- gate $P(-\pi/4)$ conditioned by $c_0=1$, that is, applied when $c=001$ (c_if with value $1$)- gate $P(-\pi/2)$ conditioned by $c_1=1$, that is, applied when $c=010$ (c_if with value $2$)- gate $P(-3\pi/4)$ conditioned by $c_1=1$ and $c_0=1$, that is, applied when $c=011$ (c_if with value $3$)Next, the $CU$ operations: we apply the $\text{MCP}(\pi/4)$ gate $2^t$ times, and since at the 3rd step $t=m-3=0$, we apply the gate only once.
###Code
# initialization of qubit q0
qc.reset(0)
qc.h(0)
# phase correction
qc.p(-pi/4,0).c_if(c,1)
qc.p(-pi/2,0).c_if(c,2)
qc.p(-3*pi/4,0).c_if(c,3) # phase correction of -3*pi/4 (= -pi/2 - pi/4) when c = 011, i.e. both c0 and c1 are 1
# c-U operations
for _ in range(2**(m-3)):
qc.mcp(pi/4,[0,1],2)
# X measurement
qc.h(0)
qc.measure(0,2)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Now, we execute the circuit with the simulator without noise.
###Code
count0 = execute(qc, sim).result().get_counts()
key_new = [str(int(key,2)/2**m) for key in list(count0.keys())]
count1 = dict(zip(key_new, count0.values()))
fig, ax = plt.subplots(1,2)
plot_histogram(count0, ax=ax[0])
plot_histogram(count1, ax=ax[1])
fig.tight_layout()
###Output
_____no_output_____
###Markdown
We have obtained $100\%$ probability to find $\varphi=0.125$, that is, $1/8$, as expected.
###Code
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
/opt/miniconda3/envs/qiskit/lib/python3.9/site-packages/qiskit/aqua/operators/operator_globals.py:48: DeprecationWarning: `from_label` is deprecated and will be removed no earlier than 3 months after the release date. Use Pauli(label) instead.
X = make_immutable(PrimitiveOp(Pauli.from_label('X')))
|
tutorials/4-Optimization/FinRL_HyperparameterTuning_Optuna.ipynb | ###Markdown
INTRODUCTION1. This tutorial introduces *trade-based metrics* for hyperparameter optimization of FinRL models.2. As the name implies, trade-based metrics are associated with the trade activity that FinRL captures in its actions tables. In general, a trade is represented by an entry in an actions file.3. Such metrics include counts of winning and losing trades, total value of wins and losses, and the ratio of average market value of wins to losses.4. In this tutorial, we will be tuning hyperparameters for Stable Baselines3 models using Optuna.5. The default model hyperparameters may not be adequate for your custom portfolio or custom state-space. Reinforcement learning algorithms are sensitive to hyperparameters, hence tuning is an important step.6. Hyperparameters are tuned based on an objective, which needs to be maximized or minimized. ***In this tutorial, the ratio of average winning to losing trade value is used as the objective.*** This ratio is to be ***maximized***.7. This tutorial incorporates a multi-stock framework based on the 30 stocks (aka tickers) in the DOW JONES Industrial Average. Trade metrics are calculated for each ticker and then aggregated.8. **IMPORTANT**: While the DOW stocks represent a portfolio, portfolio optimization techniques, such as the classic Markowitz mean-variance model, are not applied in this analysis. Other FinRL tutorials and examples demonstrate portfolio optimization.
###Code
#Installing FinRL
# Set colab status to trigger installs
clb = True
print(f'Preparing for colab: {clb}')
pkgs = ['FinRL', 'optuna', 'Ray/rllib','plotly','ipywidgets']
if clb:
print(f'Installing packages: {pkgs}')
# Set Variables
## Fixed
tpm_hist = {} # record tp metric values for trials
tp_metric = 'avgwl' # specified trade_param_metric: ratio avg value win/loss
## Settable by User
n_trials = 5 # number of HP optimization runs
total_timesteps = 2000 # per HP optimization run
## Logging callback params
lc_threshold=1e-5
lc_patience=15
lc_trial_number=5
%%capture
if clb:
# installing packages
!pip install pyfolio-reloaded #original pyfolio no longer maintained
!pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
!pip install optuna
!pip install -U "ray[rllib]"
!pip install plotly
!pip install ipywidgets
!pip install -U kaleido # enables saving plots to file
#Importing the libraries
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# matplotlib.use('Agg')
import datetime
import optuna
from pathlib import Path
from google.colab import files
%matplotlib inline
from finrl import config
from finrl import config_tickers
from optuna.integration import PyTorchLightningPruningCallback
from finrl.finrl_meta.preprocessor.yahoodownloader import YahooDownloader
from finrl.finrl_meta.preprocessor.preprocessors import FeatureEngineer, data_split
from finrl.finrl_meta.env_stock_trading.env_stocktrading import StockTradingEnv
from finrl.finrl_meta.env_stock_trading.env_stocktrading_np import StockTradingEnv as StockTradingEnv_numpy
from finrl.drl_agents.stablebaselines3.models import DRLAgent
from finrl.drl_agents.rllib.models import DRLAgent as DRLAgent_rllib
from finrl.finrl_meta.data_processor import DataProcessor
import joblib
from finrl.plot import backtest_stats, backtest_plot, get_daily_return, get_baseline
import ray
from pprint import pprint
import kaleido
import sys
sys.path.append("../FinRL-Library")
import itertools
import torch
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
print(f'Torch device: {device}')
###Output
_____no_output_____
###Markdown
Zipline was developed by Quantopian, which also created pyfolio. The latter is used in FinRL to calculate and display backtest results. Despite the unavailability of zipline, as reported above, pyfolio remains operational. See [here](https://github.com/quantopian/pyfolio/issues/654) for more information.
###Code
## Connect to GPU for faster processing
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Not connected to a GPU')
else:
print(gpu_info)
import os
if not os.path.exists("./" + config.DATA_SAVE_DIR):
os.makedirs("./" + config.DATA_SAVE_DIR)
if not os.path.exists("./" + config.TRAINED_MODEL_DIR):
os.makedirs("./" + config.TRAINED_MODEL_DIR)
if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR):
os.makedirs("./" + config.TENSORBOARD_LOG_DIR)
if not os.path.exists("./" + config.RESULTS_DIR):
os.makedirs("./" + config.RESULTS_DIR)
###Output
_____no_output_____
###Markdown
COLLECTING DATA AND PREPROCESSING1. Load DOW 30 prices2. Add technical indicators3. Create *processed_full*, the final data set for training and testingTo save time in multiple runs, if the processed_full file is available, it is read from a previously saved csv file.
###Code
#Custom ticker list dataframe download
#TODO save df to avoid download
path_pf = '/content/ticker_data.csv'
if Path(path_pf).is_file():
print('Reading ticker data')
df = pd.read_csv(path_pf)
else:
print('Downloading ticker data')
ticker_list = config_tickers.DOW_30_TICKER
df = YahooDownloader(start_date = '2009-01-01',
end_date = '2021-10-01',
ticker_list = ticker_list).fetch_data()
df.to_csv('ticker_data.csv')
def create_processed_full(processed):
list_ticker = processed["tic"].unique().tolist()
list_date = list(pd.date_range(processed['date'].min(),processed['date'].max()).astype(str))
combination = list(itertools.product(list_date,list_ticker))
processed_full = pd.DataFrame(combination,columns=["date","tic"]).merge(processed,on=["date","tic"],how="left")
processed_full = processed_full[processed_full['date'].isin(processed['date'])]
processed_full = processed_full.sort_values(['date','tic'])
processed_full = processed_full.fillna(0)
processed_full.sort_values(['date','tic'],ignore_index=True).head(5)
processed_full.to_csv('processed_full.csv')
return processed_full
#You can add technical indicators and turbulence factor to dataframe
#Just set the use_technical_indicator=True, use_vix=True and use_turbulence=True
def create_techind():
fe = FeatureEngineer(
use_technical_indicator=True,
tech_indicator_list = config.TECHNICAL_INDICATORS_LIST,
use_vix=True,
use_turbulence=True,
user_defined_feature = False)
processed = fe.preprocess_data(df)
return processed
#Load price and technical indicator data from file if available
path_pf = '/content/processed_full.csv'
if Path(path_pf).is_file():
print('Reading processed_full data')
processed_full = pd.read_csv(path_pf)
else:
print('Creating processed_full file')
processed=create_techind()
processed_full=create_processed_full(processed)
train = data_split(processed_full, '2009-01-01','2020-07-01')
trade = data_split(processed_full, '2020-05-01','2021-10-01')
print(f'Number of training samples: {len(train)}')
print(f'Number of testing samples: {len(trade)}')
stock_dimension = len(train.tic.unique())
state_space = 1 + 2*stock_dimension + len(config.TECHNICAL_INDICATORS_LIST) * stock_dimension
print(f"Stock Dimension: {stock_dimension}, State Space: {state_space}")
#Defining the environment kwargs
env_kwargs = {
"hmax": 100,
"initial_amount": 1000000,
"buy_cost_pct": 0.001,
"sell_cost_pct": 0.001,
"state_space": state_space,
"stock_dim": stock_dimension,
"tech_indicator_list": config.TECHNICAL_INDICATORS_LIST,
"action_space": stock_dimension,
"reward_scaling": 1e-4
}
#Instantiate the training gym compatible environment
e_train_gym = StockTradingEnv(df = train, **env_kwargs)
#Instantiate the training environment
# Also instantiate our training agent
env_train, _ = e_train_gym.get_sb_env()
#print(type(env_train))
agent = DRLAgent(env = env_train)
#Instantiate the trading environment
e_trade_gym = StockTradingEnv(df = trade, turbulence_threshold = None, **env_kwargs)
###Output
_____no_output_____
###Markdown
TRADE PERFORMANCE CODEThe following code calculates trade performance metrics, which are then used as an objective for optimizing hyperparameter values. There are several available metrics. In this tutorial, the default choice is the ratio of average value of winning to losing trades.
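###Markdown
A tiny worked example (illustrative numbers only, not from an actual backtest): with winning trade values of 10 and 30 and losing trade values of -5 and -15, the average win is 20, the average loss is -10, and the objective used below evaluates to |20 / -10| = 2.0. This is the quantity that the hyperparameter search tries to maximize.
###Code
# Illustrative arithmetic behind the 'avgwl' objective (hypothetical trade values)
wins = [10.0, 30.0]
losses = [-5.0, -15.0]
avg_w = sum(wins) / len(wins)      # 20.0
avg_l = sum(losses) / len(losses)  # -10.0
assert abs(avg_w / avg_l) == 2.0   # ratio of average win to average loss
###Output
_____no_output_____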
###Code
#MAIN METHOD
# Calculates Trade Performance for Objective
# Called from objective method
# Returns selected trade perf metric(s)
# Requires actions and associated prices
def calc_trade_perf_metric(df_actions,
df_prices_trade,
tp_metric,
dbg=False):
df_actions_p, df_prices_p, tics = prep_data(df_actions.copy(),
df_prices_trade.copy())
# actions predicted by trained model on trade data
df_actions_p.to_csv('df_actions.csv')
# Confirms that actions, prices and tics are consistent
df_actions_s, df_prices_s, tics_prtfl = \
sync_tickers(df_actions_p.copy(),df_prices_p.copy(),tics)
# copy to ensure that tics from portfolio remains unchanged
tics = tics_prtfl.copy()
# Analysis is performed on each portfolio ticker
perf_data= collect_performance_data(df_actions_s, df_prices_s, tics)
# profit/loss for each ticker
pnl_all = calc_pnl_all(perf_data, tics)
# values for trade performance metrics
perf_results = calc_trade_perf(pnl_all)
df = pd.DataFrame.from_dict(perf_results, orient='index')
# calculate and return trade metric value as objective
m = calc_trade_metric(df,tp_metric)
print(f'Ratio Avg Win/Avg Loss: {m}')
k = str(len(tpm_hist)+1)
# save metric value
tpm_hist[k] = m
return m
# Supporting methods
def calc_trade_metric(df,metric='avgwl'):
'''# trades', '# wins', '# losses', 'wins total value', 'wins avg value',
'losses total value', 'losses avg value'''
# For this tutorial, the only metric available is the ratio of
# average values of winning to losing trades. Others are in development.
# some test cases produce no losing trades.
# The code below assigns a value as a multiple of the highest value during
# previous hp optimization runs. If the first run experiences no losses,
# a fixed value is assigned for the ratio
tpm_mult = 1.0
avgwl_no_losses = 25
if metric == 'avgwl':
if sum(df['# losses']) == 0:
try:
return max(tpm_hist.values())*tpm_mult
except ValueError:
return avgwl_no_losses
avg_w = sum(df['wins total value'])/sum(df['# wins'])
avg_l = sum(df['losses total value'])/sum(df['# losses'])
m = abs(avg_w/avg_l)
return m
def prep_data(df_actions,
df_prices_trade):
df=df_prices_trade[['date','close','tic']]
df['Date'] = pd.to_datetime(df['date'])
df = df.set_index('Date')
# set indices on both df to datetime
idx = pd.to_datetime(df_actions.index, infer_datetime_format=True)
df_actions.index=idx
tics = np.unique(df.tic)
n_tics = len(tics)
print(f'Number of tickers: {n_tics}')
print(f'Tickers: {tics}')
dategr = df.groupby('tic')
p_d={t:dategr.get_group(t).loc[:,'close'] for t in tics}
df_prices = pd.DataFrame.from_dict(p_d)
df_prices.index = df_prices.index.normalize()
return df_actions, df_prices, tics
# prepares for integrating action and price files
def link_prices_actions(df_a,
df_p):
cols_a = [t + '_a' for t in df_a.columns]
df_a.columns = cols_a
cols_p = [t + '_p' for t in df_p.columns]
df_p.columns = cols_p
return df_a, df_p
def sync_tickers(df_actions,df_tickers_p,tickers):
# Some DOW30 components may not be included in portfolio
# passed tickers includes all DOW30 components
# actions and ticker files may have different length indices
if len(df_actions) != len(df_tickers_p):
msng_dates = set(df_actions.index)^set(df_tickers_p.index)
try:
#assumption is prices has one additional timestamp (row)
df_tickers_p.drop(msng_dates,inplace=True)
except:
df_actions.drop(msng_dates,inplace=True)
df_actions, df_tickers_p = link_prices_actions(df_actions,df_tickers_p)
# identify any DOW components not in portfolio
t_not_in_a = [t for t in tickers if t + '_a' not in list(df_actions.columns)]
# remove t_not_in_a from df_tickers_p
drop_cols = [t + '_p' for t in t_not_in_a]
df_tickers_p.drop(columns=drop_cols,inplace=True)
# Tickers in portfolio
tickers_prtfl = [c.split('_')[0] for c in df_actions.columns]
return df_actions,df_tickers_p, tickers_prtfl
def collect_performance_data(dfa,dfp,tics, dbg=False):
perf_data = {}
# In current version, files columns include secondary identifier
for t in tics:
# actions: purchase/sale of DOW equities
acts = dfa['_'.join([t,'a'])].values
# ticker prices
prices = dfp['_'.join([t,'p'])].values
# market value of purchases/sales
tvals_init = np.multiply(acts,prices)
d={'actions':acts, 'prices':prices,'init_values':tvals_init}
perf_data[t]=d
return perf_data
def calc_pnl_all(perf_dict, tics_all):
# calculate profit/loss for each ticker
print(f'Calculating profit/loss for each ticker')
pnl_all = {}
for tic in tics_all:
pnl_t = []
tic_data = perf_dict[tic]
init_values = tic_data['init_values']
acts = tic_data['actions']
prices = tic_data['prices']
cs = np.cumsum(acts)
args_s = [i + 1 for i in range(len(cs) - 1) if cs[i + 1] < cs[i]]
# tic actions with no sales
if not args_s:
pnl = complete_calc_buyonly(acts, prices, init_values)
pnl_all[tic] = pnl
continue
# copy acts: acts_rev will be revised based on closing/reducing init positions
pnl_all = execute_position_sales(tic,acts,prices,args_s,pnl_all)
return pnl_all
def complete_calc_buyonly(actions, prices, init_values):
# calculate final pnl for each ticker assuming no sales
fnl_price = prices[-1]
final_values = np.multiply(fnl_price, actions)
pnl = np.subtract(final_values, init_values)
return pnl
def execute_position_sales(tic,acts,prices,args_s,pnl_all):
# calculate final pnl for each ticker with sales
pnl_t = []
acts_rev = acts.copy()
# location of sales transactions
for s in args_s: # s is scaler
# price_s = [prices[s]]
act_s = [acts_rev[s]]
args_b = [i for i in range(s) if acts_rev[i] > 0]
prcs_init_trades = prices[args_b]
acts_init_trades = acts_rev[args_b]
# update actions for sales
# reduce/eliminate init values through trades
# always start with earliest purchase that has not been closed through sale
# selectors for purchase and sales trades
# find earliest remaining purchase
arg_sel = min(args_b)
# sel_s = len(acts_trades) - 1
# closing part/all of earliest init trade not yet closed
# sales actions are negative
# in this test case, abs_val of init and sales share counts are same
# zero-out sales actions
# market value of sale
# max number of shares to be closed: may be less than # originally purchased
acts_shares = min(abs(act_s.pop()), acts_rev[arg_sel])
# mv of shares when purchased
mv_p = abs(acts_shares * prices[arg_sel])
# mv of sold shares
mv_s = abs(acts_shares * prices[s])
# calc pnl
pnl = mv_s - mv_p
# reduce init share count
# close all/part of init purchase
acts_rev[arg_sel] -= acts_shares
acts_rev[s] += acts_shares
# calculate pnl for trade
# value of associated purchase
# find earliest non-zero positive act in acts_revs
pnl_t.append(pnl)
pnl_op = calc_pnl_for_open_positions(acts_rev, prices)
#pnl_op is list
# add pnl_op results (if any) to pnl_t (both lists)
pnl_t.extend(pnl_op)
#print(f'Total pnl for {tic}: {np.sum(pnl_t)}')
pnl_all[tic] = np.array(pnl_t)
return pnl_all
def calc_pnl_for_open_positions(acts,prices):
# identify any positive share values after accounting for sales
pnl = []
fp = prices[-1] # last price
open_pos_arg = np.argwhere(acts>0)
if len(open_pos_arg)==0:return pnl # no open positions
mkt_vals_open = np.multiply(acts[open_pos_arg], prices[open_pos_arg])
# mkt val at end of testing period
# treat as trades for purposes of calculating pnl at end of testing period
mkt_vals_final = np.multiply(fp, acts[open_pos_arg])
pnl_a = np.subtract(mkt_vals_final, mkt_vals_open)
#convert to list
pnl = [i[0] for i in pnl_a.tolist()]
#print(f'Market value of open positions at end of testing {pnl}')
return pnl
def calc_trade_perf(pnl_d):
# calculate trade performance metrics
perf_results = {}
for t,pnl in pnl_d.items():
wins = pnl[pnl>0] # total val
losses = pnl[pnl<0]
n_wins = len(wins)
n_losses = len(losses)
n_trades = n_wins + n_losses
wins_val = np.sum(wins)
losses_val = np.sum(losses)
wins_avg = 0 if n_wins==0 else np.mean(wins)
#print(f'{t} n_wins: {n_wins} n_losses: {n_losses}')
losses_avg = 0 if n_losses==0 else np.mean(losses)
d = {'# trades':n_trades,'# wins':n_wins,'# losses':n_losses,
'wins total value':wins_val, 'wins avg value':wins_avg,
'losses total value':losses_val, 'losses avg value':losses_avg,}
perf_results[t] = d
return perf_results
###Output
_____no_output_____
###Markdown
TUNING HYPERPARAMETERS USING OPTUNA1. Go to this [link](https://github.com/DLR-RM/rl-baselines3-zoo/blob/master/utils/hyperparams_opt.py); there you will find all the possible hyperparameters to tune for all the models.2. For your model, grab the hyperparameters that you want to optimize and then return a dictionary of hyperparameters.3. There is a feature in Optuna called hyperparameter importance, which you can use to point out the hyperparameters that matter most for tuning.4. By default, Optuna uses the [TPESampler](https://www.youtube.com/watch?v=tdwgR1AqQ8Y) for sampling hyperparameters from the search space.
###Code
def sample_ddpg_params(trial:optuna.Trial):
# Size of the replay buffer
buffer_size = trial.suggest_categorical("buffer_size", [int(1e4), int(1e5), int(1e6)])
learning_rate = trial.suggest_loguniform("learning_rate", 1e-5, 1)
batch_size = trial.suggest_categorical("batch_size", [32, 64, 128, 256, 512])
return {"buffer_size": buffer_size,
"learning_rate":learning_rate,
"batch_size":batch_size}
###Output
_____no_output_____
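###Markdown
As a quick illustrative usage (added here, not part of the original notebook), Optuna's FixedTrial can replay a fixed set of parameter choices, which makes it easy to see exactly what dictionary sample_ddpg_params hands to the agent; the values below are arbitrary picks from the defined search space.
###Code
# Illustrative only: replay fixed hyperparameter choices through the sampler
import optuna
fixed_trial = optuna.trial.FixedTrial(
    {"buffer_size": int(1e5), "learning_rate": 3e-4, "batch_size": 128}
)
sampled = sample_ddpg_params(fixed_trial)
assert sampled == {"buffer_size": 100000, "learning_rate": 3e-4, "batch_size": 128}
###Output
_____no_output_____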
###Markdown
*OPTIONAL CODE FOR SAMPLING HYPERPARAMETERS*Replace the current call in the function *objective* with `hyperparameters = sample_ddpg_params_all(trial)`
###Code
def sample_ddpg_params_all(trial:optuna.Trial,
# fixed values from previous study
learning_rate=0.0103,
batch_size=128,
buffer_size=int(1e6)):
gamma = trial.suggest_categorical("gamma", [0.94, 0.96, 0.98])
# Polyak coeff
tau = trial.suggest_categorical("tau", [0.08, 0.1, 0.12])
train_freq = trial.suggest_categorical("train_freq", [512,768,1024])
gradient_steps = train_freq
noise_type = trial.suggest_categorical("noise_type", ["ornstein-uhlenbeck", "normal", None])
noise_std = trial.suggest_categorical("noise_std", [.1,.2,.3] )
# NOTE: Add "verybig" to net_arch when tuning HER (see TD3)
net_arch = trial.suggest_categorical("net_arch", ["small", "big"])
# activation_fn = trial.suggest_categorical('activation_fn', [nn.Tanh, nn.ReLU, nn.ELU, nn.LeakyReLU])
net_arch = {
"small": [64, 64],
"medium": [256, 256],
"big": [512, 512],
}[net_arch]
hyperparams = {
"batch_size": batch_size,
"buffer_size": buffer_size,
"gamma": gamma,
"gradient_steps": gradient_steps,
"learning_rate": learning_rate,
"tau": tau,
"train_freq": train_freq,
#"noise_std": noise_std,
#"noise_type": noise_type,
"policy_kwargs": dict(net_arch=net_arch)
}
return hyperparams
###Output
_____no_output_____
###Markdown
CALLBACKS1. The callback stops the study if the improvement in the objective falls below a certain threshold.2. It only does so after a minimum number of trials (trial_number) has been reached, not before.3. It waits until the threshold condition has held for a given number of trials (patience) before stopping.
###Code
class LoggingCallback:
def __init__(self,threshold,trial_number,patience):
'''
threshold:int tolerance for increase in objective
trial_number: int Prune after minimum number of trials
patience: int patience for the threshold
'''
self.threshold = threshold
self.trial_number = trial_number
self.patience = patience
print(f'Callback threshold {self.threshold}, \
trial_number {self.trial_number}, \
patience {self.patience}')
self.cb_list = [] #Trials list for which threshold is reached
def __call__(self,study:optuna.study, frozen_trial:optuna.Trial):
#Setting the best value in the current trial
study.set_user_attr("previous_best_value", study.best_value)
#Checking if the minimum number of trials have pass
if frozen_trial.number >self.trial_number:
previous_best_value = study.user_attrs.get("previous_best_value",None)
#Checking if the previous and current objective values have the same sign
if previous_best_value * study.best_value >=0:
#Checking for the threshold condition
if abs(previous_best_value-study.best_value) < self.threshold:
self.cb_list.append(frozen_trial.number)
#If threshold is achieved for the patience amount of time
if len(self.cb_list)>self.patience:
print('The study stops now...')
print('With number',frozen_trial.number ,'and value ',frozen_trial.value)
print('The previous and current best values are {} and {} respectively'
.format(previous_best_value, study.best_value))
study.stop()
from IPython.display import clear_output
import sys
os.makedirs("models",exist_ok=True)
def objective(trial:optuna.Trial):
#Trial will suggest a set of hyperparamters from the specified range
# Optional to optimize larger set of parameters
# hyperparameters = sample_ddpg_params_all(trial)
# Optimize buffer size, batch size, learning rate
hyperparameters = sample_ddpg_params(trial)
#print(f'Hyperparameters from objective: {hyperparameters.keys()}')
policy_kwargs = None # default
if 'policy_kwargs' in hyperparameters.keys():
policy_kwargs = hyperparameters['policy_kwargs']
del hyperparameters['policy_kwargs']
#print(f'Policy keyword arguments {policy_kwargs}')
model_ddpg = agent.get_model("ddpg",
policy_kwargs = policy_kwargs,
model_kwargs = hyperparameters )
#You can increase it for better comparison
trained_ddpg = agent.train_model(model=model_ddpg,
tb_log_name="ddpg",
total_timesteps=total_timesteps)
trained_ddpg.save('models/ddpg_{}.pth'.format(trial.number))
clear_output(wait=True)
#For the given hyperparamters, determine the account value in the trading period
df_account_value, df_actions = DRLAgent.DRL_prediction(
model=trained_ddpg,
environment = e_trade_gym)
# Calculate trade performance metric
# Currently ratio of average win and loss market values
tpm = calc_trade_perf_metric(df_actions,trade,tp_metric)
return tpm
#Create a study object and specify the direction as 'maximize'
#As you want to maximize the objective (here, the trade performance metric)
#The pruner stops unpromising trials early
#Use a pruner, else you may get errors related to divergence of the model
#You can also use the multivariate sampler
#sampler = optuna.samplers.TPESampler(multivariate=True, seed=42)
sampler = optuna.samplers.TPESampler()
study = optuna.create_study(study_name="ddpg_study",direction='maximize',
sampler = sampler, pruner=optuna.pruners.HyperbandPruner())
logging_callback = LoggingCallback(threshold=lc_threshold,
patience=lc_patience,
trial_number=lc_trial_number)
#You can increase the n_trials for a better search space scanning
study.optimize(objective, n_trials=n_trials,catch=(ValueError,),callbacks=[logging_callback])
joblib.dump(study, "final_ddpg_study__.pkl")
#Get the best hyperparameters
print('Hyperparameters after tuning',study.best_params)
print('Hyperparameters before tuning',config.DDPG_PARAMS)
study.best_trial
from stable_baselines3 import DDPG
tuned_model_ddpg = DDPG.load('models/ddpg_{}.pth'.format(study.best_trial.number),env=env_train)
#Trading period account value with tuned model
df_account_value_tuned, df_actions_tuned = DRLAgent.DRL_prediction(
model=tuned_model_ddpg,
environment = e_trade_gym)
def add_trade_perf_metric(df_actions,
perf_stats_all,
trade,
tp_metric):
tpm = calc_trade_perf_metric(df_actions,trade,tp_metric)
trp_metric = {'Value':tpm}
df2 = pd.DataFrame(trp_metric,index=['Trade_Perf'])
perf_stats_all = perf_stats_all.append(df2)
return perf_stats_all
now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M')
df_actions_tuned.to_csv("./"+config.RESULTS_DIR+"/tuned_actions_" +now+ '.csv')
#Backtesting with our pruned model
print("==============Get Backtest Results===========")
print("==============Pruned Model===========")
now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M')
perf_stats_all_tuned = backtest_stats(account_value=df_account_value_tuned)
perf_stats_all_tuned = pd.DataFrame(perf_stats_all_tuned)
perf_stats_all_tuned.columns = ['Value']
# add trade performance metric
perf_stats_all_tuned = \
add_trade_perf_metric(df_actions_tuned,
perf_stats_all_tuned,
trade,
tp_metric)
perf_stats_all_tuned.to_csv("./"+config.RESULTS_DIR+"/perf_stats_all_tuned_"+now+'.csv')
#Now train with the non-tuned (default) hyperparameters
#Default config.DDPG_PARAMS
non_tuned_model_ddpg = agent.get_model("ddpg",model_kwargs = config.DDPG_PARAMS )
trained_ddpg = agent.train_model(model=non_tuned_model_ddpg,
tb_log_name='ddpg',
total_timesteps=total_timesteps)
df_account_value, df_actions = DRLAgent.DRL_prediction(
model=trained_ddpg,
environment = e_trade_gym)
#Backtesting for the non-tuned hyperparameters
print("==============Get Backtest Results===========")
print("============Default Hyperparameters==========")
now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M')
perf_stats_all = backtest_stats(account_value=df_account_value)
perf_stats_all = pd.DataFrame(perf_stats_all)
perf_stats_all.columns = ['Value']
# add trade performance metric
perf_stats_all = add_trade_perf_metric(df_actions,
perf_stats_all,
trade,
tp_metric)
perf_stats_all.to_csv("./"+config.RESULTS_DIR+"/perf_stats_all_"+now+'.csv')
#Certainly you can afford a larger number of trials for further optimization
from optuna.visualization import plot_optimization_history
fig = plot_optimization_history(study)
#"./"+config.RESULTS_DIR+
fig.write_image("./"+config.RESULTS_DIR+"/opt_hist.png")
fig.show()
from optuna.visualization import plot_contour
from optuna.visualization import plot_edf
from optuna.visualization import plot_intermediate_values
from optuna.visualization import plot_optimization_history
from optuna.visualization import plot_parallel_coordinate
from optuna.visualization import plot_param_importances
from optuna.visualization import plot_slice
#Hyperparameter importances
try:
fig = plot_param_importances(study)
fig.write_image("./"+config.RESULTS_DIR+"/params_importances.png")
fig.show()
except:
print('Cannot calculate hyperparameter importances: no variation')
fig = plot_edf(study)
fig.write_image("./"+config.RESULTS_DIR+"/emp_dist_func.png")
fig.show()
files.download('/content/final_ddpg_study__.pkl')
###Output
_____no_output_____
###Markdown
Introduction1. This tutorial introduces *trade-based metrics* for hyperparameter optimization of FinRL models.2. As the name implies, trade-based metrics are associated with the trade activity that FinRL captures in its actions tables. In general, a trade is represented by an entry in an actions file.3. Such metrics include counts of winning and losing trades, total value of wins and losses, and the ratio of average market value of wins to losses.4. In this tutorial, we will be tuning hyperparameters for Stable Baselines3 models using Optuna.5. The default model hyperparameters may not be adequate for your custom portfolio or custom state-space. Reinforcement learning algorithms are sensitive to hyperparameters, hence tuning is an important step.6. Hyperparameters are tuned based on an objective, which needs to be maximized or minimized. ***In this tutorial, the ratio of average winning to losing trade value is used as the objective.*** This ratio is to be ***maximized***.7. This tutorial incorporates a multi-stock framework based on the 30 stocks (aka tickers) in the DOW JONES Industrial Average. Trade metrics are calculated for each ticker and then aggregated.8. **IMPORTANT**: While the DOW stocks represent a portfolio, portfolio optimization techniques, such as the classic Markowitz mean-variance model, are not applied in this analysis. Other FinRL tutorials and examples demonstrate portfolio optimization.
###Code
#Installing FinRL
# Set colab status to trigger installs
clb = True
print(f'Preparing for colab: {clb}')
pkgs = ['FinRL', 'optuna', 'Ray/rllib','plotly','ipywidgets']
if clb:
print(f'Installing packages: {pkgs}')
# Set Variables
## Fixed
tpm_hist = {} # record tp metric values for trials
tp_metric = 'avgwl' # specified trade_param_metric: ratio avg value win/loss
## Settable by User
n_trials = 5 # number of HP optimization runs
total_timesteps = 2000 # per HP optimization run
## Logging callback params
lc_threshold=1e-5
lc_patience=15
lc_trial_number=5
%%capture
if clb:
# installing packages
!pip install pyfolio-reloaded #original pyfolio no longer maintained
!pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
!pip install optuna
!pip install -U "ray[rllib]"
!pip install plotly
!pip install ipywidgets
!pip install -U kaleido # enables saving plots to file
#Importing the libraries
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# matplotlib.use('Agg')
import datetime
import optuna
from pathlib import Path
from google.colab import files
%matplotlib inline
from finrl import config
from finrl import config_tickers
from optuna.integration import PyTorchLightningPruningCallback
from finrl.finrl_meta.preprocessor.yahoodownloader import YahooDownloader
from finrl.finrl_meta.preprocessor.preprocessors import FeatureEngineer, data_split
from finrl.finrl_meta.env_stock_trading.env_stocktrading import StockTradingEnv
from finrl.finrl_meta.env_stock_trading.env_stocktrading_np import StockTradingEnv as StockTradingEnv_numpy
from finrl.agents.stablebaselines3.models import DRLAgent
from finrl.agents.rllib.models import DRLAgent as DRLAgent_rllib
from finrl.finrl_meta.data_processor import DataProcessor
import joblib
from finrl.plot import backtest_stats, backtest_plot, get_daily_return, get_baseline
import ray
from pprint import pprint
import kaleido
import sys
sys.path.append("../FinRL-Library")
import itertools
import torch
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
print(f'Torch device: {device}')
###Output
_____no_output_____
###Markdown
Zipline was developed by Quantopian, which also created pyfolio. The latter is used in FinRL to calculate and display backtest results. Despite the unavailability of zipline, as reported above, pyfolio remains operational. See [here](https://github.com/quantopian/pyfolio/issues/654) for more information.
###Code
## Connect to GPU for faster processing
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Not connected to a GPU')
else:
print(gpu_info)
import os
if not os.path.exists("./" + config.DATA_SAVE_DIR):
os.makedirs("./" + config.DATA_SAVE_DIR)
if not os.path.exists("./" + config.TRAINED_MODEL_DIR):
os.makedirs("./" + config.TRAINED_MODEL_DIR)
if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR):
os.makedirs("./" + config.TENSORBOARD_LOG_DIR)
if not os.path.exists("./" + config.RESULTS_DIR):
os.makedirs("./" + config.RESULTS_DIR)
###Output
_____no_output_____
###Markdown
Collecting data and preprocessing1. Load DOW 30 prices2. Add technical indicators3. Create *processed_full*, the final data set for training and testingTo save time in multiple runs, if the processed_full file is available, it is read from a previously saved csv file.
###Code
#Custom ticker list dataframe download
#TODO save df to avoid download
path_pf = '/content/ticker_data.csv'
if Path(path_pf).is_file():
print('Reading ticker data')
df = pd.read_csv(path_pf)
else:
print('Downloading ticker data')
ticker_list = config_tickers.DOW_30_TICKER
df = YahooDownloader(start_date = '2009-01-01',
end_date = '2021-10-01',
ticker_list = ticker_list).fetch_data()
df.to_csv('ticker_data.csv')
def create_processed_full(processed):
list_ticker = processed["tic"].unique().tolist()
list_date = list(pd.date_range(processed['date'].min(),processed['date'].max()).astype(str))
combination = list(itertools.product(list_date,list_ticker))
processed_full = pd.DataFrame(combination,columns=["date","tic"]).merge(processed,on=["date","tic"],how="left")
processed_full = processed_full[processed_full['date'].isin(processed['date'])]
processed_full = processed_full.sort_values(['date','tic'])
processed_full = processed_full.fillna(0)
processed_full.sort_values(['date','tic'],ignore_index=True).head(5)
processed_full.to_csv('processed_full.csv')
return processed_full
#You can add technical indicators and turbulence factor to dataframe
#Just set the use_technical_indicator=True, use_vix=True and use_turbulence=True
def create_techind():
fe = FeatureEngineer(
use_technical_indicator=True,
tech_indicator_list = config.INDICATORS,
use_vix=True,
use_turbulence=True,
user_defined_feature = False)
processed = fe.preprocess_data(df)
return processed
#Load price and technical indicator data from file if available
path_pf = '/content/processed_full.csv'
if Path(path_pf).is_file():
print('Reading processed_full data')
processed_full = pd.read_csv(path_pf)
else:
print('Creating processed_full file')
processed=create_techind()
processed_full=create_processed_full(processed)
train = data_split(processed_full, '2009-01-01','2020-07-01')
trade = data_split(processed_full, '2020-05-01','2021-10-01')
print(f'Number of training samples: {len(train)}')
print(f'Number of testing samples: {len(trade)}')
stock_dimension = len(train.tic.unique())
state_space = 1 + 2*stock_dimension + len(config.INDICATORS) * stock_dimension
print(f"Stock Dimension: {stock_dimension}, State Space: {state_space}")
#Defining the environment kwargs
env_kwargs = {
"hmax": 100,
"initial_amount": 1000000,
"buy_cost_pct": 0.001,
"sell_cost_pct": 0.001,
"state_space": state_space,
"stock_dim": stock_dimension,
"tech_indicator_list": config.INDICATORS,
"action_space": stock_dimension,
"reward_scaling": 1e-4
}
#Instantiate the training gym compatible environment
e_train_gym = StockTradingEnv(df = train, **env_kwargs)
#Instantiate the training environment
# Also instantiate our training agent
env_train, _ = e_train_gym.get_sb_env()
#print(type(env_train))
agent = DRLAgent(env = env_train)
#Instantiate the trading environment
e_trade_gym = StockTradingEnv(df = trade, turbulence_threshold = None, **env_kwargs)
###Output
_____no_output_____
###Markdown
Trade performance codeThe following code calculates trade performance metrics, which are then used as an objective for optimizing hyperparameter values. There are several available metrics. In this tutorial, the default choice is the ratio of average value of winning to losing trades.
###Code
#Main method
# Calculates Trade Performance for Objective
# Called from objective method
# Returns selected trade perf metric(s)
# Requires actions and associated prices
def calc_trade_perf_metric(df_actions,
df_prices_trade,
tp_metric,
dbg=False):
df_actions_p, df_prices_p, tics = prep_data(df_actions.copy(),
df_prices_trade.copy())
# actions predicted by trained model on trade data
df_actions_p.to_csv('df_actions.csv')
# Confirms that actions, prices and tics are consistent
df_actions_s, df_prices_s, tics_prtfl = \
sync_tickers(df_actions_p.copy(),df_prices_p.copy(),tics)
# copy to ensure that tics from portfolio remains unchanged
tics = tics_prtfl.copy()
# Analysis is performed on each portfolio ticker
perf_data= collect_performance_data(df_actions_s, df_prices_s, tics)
# profit/loss for each ticker
pnl_all = calc_pnl_all(perf_data, tics)
# values for trade performance metrics
perf_results = calc_trade_perf(pnl_all)
df = pd.DataFrame.from_dict(perf_results, orient='index')
# calculate and return trade metric value as objective
m = calc_trade_metric(df,tp_metric)
print(f'Ratio Avg Win/Avg Loss: {m}')
k = str(len(tpm_hist)+1)
# save metric value
tpm_hist[k] = m
return m
# Supporting methods
def calc_trade_metric(df,metric='avgwl'):
'''# trades', '# wins', '# losses', 'wins total value', 'wins avg value',
'losses total value', 'losses avg value'''
# For this tutorial, the only metric available is the ratio of
# average values of winning to losing trades. Others are in development.
# some test cases produce no losing trades.
# The code below assigns a value as a multiple of the highest value during
# previous hp optimization runs. If the first run experiences no losses,
# a fixed value is assigned for the ratio
tpm_mult = 1.0
avgwl_no_losses = 25
if metric == 'avgwl':
if sum(df['# losses']) == 0:
try:
return max(tpm_hist.values())*tpm_mult
except ValueError:
return avgwl_no_losses
avg_w = sum(df['wins total value'])/sum(df['# wins'])
avg_l = sum(df['losses total value'])/sum(df['# losses'])
m = abs(avg_w/avg_l)
return m
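# Illustrative example (added, not part of the original tutorial): how the 'avgwl'
# objective is computed from a hypothetical per-ticker summary. The numbers are made up.
# Average win = (300 + 150) / (3 + 2) = 90; average loss = (-50 - 100) / (1 + 2) = -50,
# so the objective is abs(90 / -50) = 1.8.
_example_perf = pd.DataFrame({'# wins': [3, 2], '# losses': [1, 2],
'wins total value': [300.0, 150.0], 'losses total value': [-50.0, -100.0]})
print(f"Example avgwl objective: {calc_trade_metric(_example_perf)}")  # expected 1.8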
def prep_data(df_actions,
df_prices_trade):
df=df_prices_trade[['date','close','tic']]
df['Date'] = pd.to_datetime(df['date'])
df = df.set_index('Date')
# set indices on both df to datetime
idx = pd.to_datetime(df_actions.index, infer_datetime_format=True)
df_actions.index=idx
tics = np.unique(df.tic)
n_tics = len(tics)
print(f'Number of tickers: {n_tics}')
print(f'Tickers: {tics}')
dategr = df.groupby('tic')
p_d={t:dategr.get_group(t).loc[:,'close'] for t in tics}
df_prices = pd.DataFrame.from_dict(p_d)
df_prices.index = df_prices.index.normalize()
return df_actions, df_prices, tics
# prepares for integrating action and price files
def link_prices_actions(df_a,
df_p):
cols_a = [t + '_a' for t in df_a.columns]
df_a.columns = cols_a
cols_p = [t + '_p' for t in df_p.columns]
df_p.columns = cols_p
return df_a, df_p
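# Illustrative example (added): link_prices_actions only tags the action and price
# columns with '_a' / '_p' suffixes so both frames can share one column namespace.
_dfa_demo = pd.DataFrame({'AAPL': [10], 'MSFT': [0]})
_dfp_demo = pd.DataFrame({'AAPL': [130.0], 'MSFT': [250.0]})
_dfa_demo, _dfp_demo = link_prices_actions(_dfa_demo, _dfp_demo)
print(list(_dfa_demo.columns), list(_dfp_demo.columns))  # ['AAPL_a', 'MSFT_a'] ['AAPL_p', 'MSFT_p']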
def sync_tickers(df_actions,df_tickers_p,tickers):
# Some DOW30 components may not be included in portfolio
# passed tickers includes all DOW30 components
# actions and ticker files may have different length indices
if len(df_actions) != len(df_tickers_p):
msng_dates = set(df_actions.index)^set(df_tickers_p.index)
try:
#assumption is prices has one additional timestamp (row)
df_tickers_p.drop(msng_dates,inplace=True)
except:
df_actions.drop(msng_dates,inplace=True)
df_actions, df_tickers_p = link_prices_actions(df_actions,df_tickers_p)
# identify any DOW components not in portfolio
t_not_in_a = [t for t in tickers if t + '_a' not in list(df_actions.columns)]
# remove t_not_in_a from df_tickers_p
drop_cols = [t + '_p' for t in t_not_in_a]
df_tickers_p.drop(columns=drop_cols,inplace=True)
# Tickers in portfolio
tickers_prtfl = [c.split('_')[0] for c in df_actions.columns]
return df_actions,df_tickers_p, tickers_prtfl
def collect_performance_data(dfa,dfp,tics, dbg=False):
perf_data = {}
# In current version, files columns include secondary identifier
for t in tics:
# actions: purchase/sale of DOW equities
acts = dfa['_'.join([t,'a'])].values
# ticker prices
prices = dfp['_'.join([t,'p'])].values
# market value of purchases/sales
tvals_init = np.multiply(acts,prices)
d={'actions':acts, 'prices':prices,'init_values':tvals_init}
perf_data[t]=d
return perf_data
def calc_pnl_all(perf_dict, tics_all):
# calculate profit/loss for each ticker
print(f'Calculating profit/loss for each ticker')
pnl_all = {}
for tic in tics_all:
pnl_t = []
tic_data = perf_dict[tic]
init_values = tic_data['init_values']
acts = tic_data['actions']
prices = tic_data['prices']
cs = np.cumsum(acts)
args_s = [i + 1 for i in range(len(cs) - 1) if cs[i + 1] < cs[i]]
# tic actions with no sales
if not args_s:
pnl = complete_calc_buyonly(acts, prices, init_values)
pnl_all[tic] = pnl
continue
# copy acts: acts_rev will be revised based on closing/reducing init positions
pnl_all = execute_position_sales(tic,acts,prices,args_s,pnl_all)
return pnl_all
def complete_calc_buyonly(actions, prices, init_values):
# calculate final pnl for each ticker assuming no sales
fnl_price = prices[-1]
final_values = np.multiply(fnl_price, actions)
pnl = np.subtract(final_values, init_values)
return pnl
def execute_position_sales(tic,acts,prices,args_s,pnl_all):
# calculate final pnl for each ticker with sales
pnl_t = []
acts_rev = acts.copy()
# location of sales transactions
for s in args_s: # s is a scalar index of a sale transaction
# price_s = [prices[s]]
act_s = [acts_rev[s]]
args_b = [i for i in range(s) if acts_rev[i] > 0]
prcs_init_trades = prices[args_b]
acts_init_trades = acts_rev[args_b]
# update actions for sales
# reduce/eliminate init values through trades
# always start with earliest purchase that has not been closed through sale
# selectors for purchase and sales trades
# find earliest remaining purchase
arg_sel = min(args_b)
# sel_s = len(acts_trades) - 1
# closing part/all of earliest init trade not yet closed
# sales actions are negative
# in this test case, abs_val of init and sales share counts are same
# zero-out sales actions
# market value of sale
# max number of shares to be closed: may be less than # originally purchased
acts_shares = min(abs(act_s.pop()), acts_rev[arg_sel])
# mv of shares when purchased
mv_p = abs(acts_shares * prices[arg_sel])
# mv of sold shares
mv_s = abs(acts_shares * prices[s])
# calc pnl
pnl = mv_s - mv_p
# reduce init share count
# close all/part of init purchase
acts_rev[arg_sel] -= acts_shares
acts_rev[s] += acts_shares
# calculate pnl for trade
# value of associated purchase
# find earliest non-zero positive act in acts_revs
pnl_t.append(pnl)
pnl_op = calc_pnl_for_open_positions(acts_rev, prices)
#pnl_op is list
# add pnl_op results (if any) to pnl_t (both lists)
pnl_t.extend(pnl_op)
#print(f'Total pnl for {tic}: {np.sum(pnl_t)}')
pnl_all[tic] = np.array(pnl_t)
return pnl_all
def calc_pnl_for_open_positions(acts,prices):
# identify any positive share values after accounting for sales
pnl = []
fp = prices[-1] # last price
open_pos_arg = np.argwhere(acts>0)
if len(open_pos_arg)==0:return pnl # no open positions
mkt_vals_open = np.multiply(acts[open_pos_arg], prices[open_pos_arg])
# mkt val at end of testing period
# treat as trades for purposes of calculating pnl at end of testing period
mkt_vals_final = np.multiply(fp, acts[open_pos_arg])
pnl_a = np.subtract(mkt_vals_final, mkt_vals_open)
#convert to list
pnl = [i[0] for i in pnl_a.tolist()]
#print(f'Market value of open positions at end of testing {pnl}')
return pnl
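# Illustrative example (added): open positions are marked to the last price.
# Holding 5 shares bought at 100 with a final price of 110 leaves an unrealized pnl of 50.
print(calc_pnl_for_open_positions(np.array([5.0, 0.0]), np.array([100.0, 110.0])))  # [50.0]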
def calc_trade_perf(pnl_d):
# calculate trade performance metrics
perf_results = {}
for t,pnl in pnl_d.items():
wins = pnl[pnl>0] # total val
losses = pnl[pnl<0]
n_wins = len(wins)
n_losses = len(losses)
n_trades = n_wins + n_losses
wins_val = np.sum(wins)
losses_val = np.sum(losses)
wins_avg = 0 if n_wins==0 else np.mean(wins)
#print(f'{t} n_wins: {n_wins} n_losses: {n_losses}')
losses_avg = 0 if n_losses==0 else np.mean(losses)
d = {'# trades':n_trades,'# wins':n_wins,'# losses':n_losses,
'wins total value':wins_val, 'wins avg value':wins_avg,
'losses total value':losses_val, 'losses avg value':losses_avg,}
perf_results[t] = d
return perf_results
###Output
_____no_output_____
###Markdown
Tuning hyperparameters using Optuna1. Go to this [link](https://github.com/DLR-RM/rl-baselines3-zoo/blob/master/utils/hyperparams_opt.py); it lists the hyperparameters that can be tuned for each of the models.2. For your model, pick the hyperparameters you want to optimize and return them as a dictionary.3. Optuna also provides a hyperparameter importance feature that highlights which hyperparameters matter most for tuning.4. By default, Optuna uses the [TPESampler](https://www.youtube.com/watch?v=tdwgR1AqQ8Y) to sample hyperparameters from the search space.
###Code
def sample_ddpg_params(trial:optuna.Trial):
# Size of the replay buffer
buffer_size = trial.suggest_categorical("buffer_size", [int(1e4), int(1e5), int(1e6)])
learning_rate = trial.suggest_loguniform("learning_rate", 1e-5, 1)
batch_size = trial.suggest_categorical("batch_size", [32, 64, 128, 256, 512])
return {"buffer_size": buffer_size,
"learning_rate":learning_rate,
"batch_size":batch_size}
###Output
_____no_output_____
###Markdown
*OPTIONAL CODE FOR SAMPLING HYPERPARAMETERS*Replace the current call in the *objective* function with `hyperparameters = sample_ddpg_params_all(trial)`
###Code
def sample_ddpg_params_all(trial:optuna.Trial,
# fixed values from previous study
learning_rate=0.0103,
batch_size=128,
buffer_size=int(1e6)):
gamma = trial.suggest_categorical("gamma", [0.94, 0.96, 0.98])
# Polyak coeff
tau = trial.suggest_categorical("tau", [0.08, 0.1, 0.12])
train_freq = trial.suggest_categorical("train_freq", [512,768,1024])
gradient_steps = train_freq
noise_type = trial.suggest_categorical("noise_type", ["ornstein-uhlenbeck", "normal", None])
noise_std = trial.suggest_categorical("noise_std", [.1,.2,.3] )
# NOTE: Add "verybig" to net_arch when tuning HER (see TD3)
net_arch = trial.suggest_categorical("net_arch", ["small", "big"])
# activation_fn = trial.suggest_categorical('activation_fn', [nn.Tanh, nn.ReLU, nn.ELU, nn.LeakyReLU])
net_arch = {
"small": [64, 64],
"medium": [256, 256],
"big": [512, 512],
}[net_arch]
hyperparams = {
"batch_size": batch_size,
"buffer_size": buffer_size,
"gamma": gamma,
"gradient_steps": gradient_steps,
"learning_rate": learning_rate,
"tau": tau,
"train_freq": train_freq,
#"noise_std": noise_std,
#"noise_type": noise_type,
"policy_kwargs": dict(net_arch=net_arch)
}
return hyperparams
###Output
_____no_output_____
###Markdown
Callbacks1. The callback stops the study when the improvement in the objective falls below a given threshold.2. It only acts after the minimum number of trials (trial_number) has been reached, not before.3. It waits until the below-threshold condition has persisted for patience trials before stopping.
###Code
class LoggingCallback:
def __init__(self,threshold,trial_number,patience):
'''
threshold: float tolerance for improvement in the objective
trial_number: int minimum number of trials before the callback can stop the study
patience: int number of below-threshold trials tolerated before stopping
'''
self.threshold = threshold
self.trial_number = trial_number
self.patience = patience
print(f'Callback threshold {self.threshold}, \
trial_number {self.trial_number}, \
patience {self.patience}')
self.cb_list = [] #Trials list for which threshold is reached
def __call__(self,study:optuna.study, frozen_trial:optuna.Trial):
#Setting the best value in the current trial
study.set_user_attr("previous_best_value", study.best_value)
#Checking if the minimum number of trials have passed
if frozen_trial.number >self.trial_number:
previous_best_value = study.user_attrs.get("previous_best_value",None)
#Checking if the previous and current objective values have the same sign
if previous_best_value * study.best_value >=0:
#Checking for the threshold condition
if abs(previous_best_value-study.best_value) < self.threshold:
self.cb_list.append(frozen_trial.number)
#If threshold is achieved for the patience amount of time
if len(self.cb_list)>self.patience:
print('The study stops now...')
print('With number',frozen_trial.number ,'and value ',frozen_trial.value)
print('The previous and current best values are {} and {} respectively'
.format(previous_best_value, study.best_value))
study.stop()
from IPython.display import clear_output
import sys
os.makedirs("models",exist_ok=True)
def objective(trial:optuna.Trial):
#Trial will suggest a set of hyperparameters from the specified range
# Optional to optimize larger set of parameters
# hyperparameters = sample_ddpg_params_all(trial)
# Optimize buffer size, batch size, learning rate
hyperparameters = sample_ddpg_params(trial)
#print(f'Hyperparameters from objective: {hyperparameters.keys()}')
policy_kwargs = None # default
if 'policy_kwargs' in hyperparameters.keys():
policy_kwargs = hyperparameters['policy_kwargs']
del hyperparameters['policy_kwargs']
#print(f'Policy keyword arguments {policy_kwargs}')
model_ddpg = agent.get_model("ddpg",
policy_kwargs = policy_kwargs,
model_kwargs = hyperparameters )
#You can increase it for better comparison
trained_ddpg = agent.train_model(model=model_ddpg,
tb_log_name="ddpg",
total_timesteps=total_timesteps)
trained_ddpg.save('models/ddpg_{}.pth'.format(trial.number))
clear_output(wait=True)
#For the given hyperparameters, determine the account value in the trading period
df_account_value, df_actions = DRLAgent.DRL_prediction(
model=trained_ddpg,
environment = e_trade_gym)
# Calculate trade performance metric
# Currently ratio of average win and loss market values
tpm = calc_trade_perf_metric(df_actions,trade,tp_metric)
return tpm
#Create a study object and specify the direction as 'maximize',
#since the trade performance objective is to be maximized
#The pruner stops unpromising trials
#Use a pruner, otherwise you may get errors related to model divergence
#You can also use a multivariate sampler
#sampler = optuna.samplers.TPESampler(multivariate=True,seed=42)
sampler = optuna.samplers.TPESampler()
study = optuna.create_study(study_name="ddpg_study",direction='maximize',
sampler = sampler, pruner=optuna.pruners.HyperbandPruner())
logging_callback = LoggingCallback(threshold=lc_threshold,
patience=lc_patience,
trial_number=lc_trial_number)
#You can increase the n_trials for a better search space scanning
study.optimize(objective, n_trials=n_trials,catch=(ValueError,),callbacks=[logging_callback])
joblib.dump(study, "final_ddpg_study__.pkl")
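# Added (illustrative): the pickled study can be reloaded later with joblib to
# inspect results or resume the search.
reloaded_study = joblib.load("final_ddpg_study__.pkl")
print(f"Reloaded study: {len(reloaded_study.trials)} trials, best value {reloaded_study.best_value}")
# reloaded_study.optimize(objective, n_trials=5)  # optionally continue tuning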
#Get the best hyperparameters
print('Hyperparameters after tuning',study.best_params)
print('Hyperparameters before tuning',config.DDPG_PARAMS)
study.best_trial
from stable_baselines3 import DDPG
tuned_model_ddpg = DDPG.load('models/ddpg_{}.pth'.format(study.best_trial.number),env=env_train)
#Trading period account value with tuned model
df_account_value_tuned, df_actions_tuned = DRLAgent.DRL_prediction(
model=tuned_model_ddpg,
environment = e_trade_gym)
def add_trade_perf_metric(df_actions,
perf_stats_all,
trade,
tp_metric):
tpm = calc_trade_perf_metric(df_actions,trade,tp_metric)
trp_metric = {'Value':tpm}
df2 = pd.DataFrame(trp_metric,index=['Trade_Perf'])
perf_stats_all = pd.concat([perf_stats_all, df2])  # DataFrame.append is removed in newer pandas
return perf_stats_all
now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M')
df_actions_tuned.to_csv("./"+config.RESULTS_DIR+"/tuned_actions_" +now+ '.csv')
#Backtesting with the tuned model
print("==============Get Backtest Results===========")
print("==============Tuned Model===========")
now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M')
perf_stats_all_tuned = backtest_stats(account_value=df_account_value_tuned)
perf_stats_all_tuned = pd.DataFrame(perf_stats_all_tuned)
perf_stats_all_tuned.columns = ['Value']
# add trade performance metric
perf_stats_all_tuned = \
add_trade_perf_metric(df_actions_tuned,
perf_stats_all_tuned,
trade,
tp_metric)
perf_stats_all_tuned.to_csv("./"+config.RESULTS_DIR+"/perf_stats_all_tuned_"+now+'.csv')
#Now train with untuned (default) hyperparameters
#Default config.DDPG_PARAMS
non_tuned_model_ddpg = agent.get_model("ddpg",model_kwargs = config.DDPG_PARAMS )
trained_ddpg = agent.train_model(model=non_tuned_model_ddpg,
tb_log_name='ddpg',
total_timesteps=total_timesteps)
df_account_value, df_actions = DRLAgent.DRL_prediction(
model=trained_ddpg,
environment = e_trade_gym)
#Backtesting with default (untuned) hyperparameters
print("==============Get Backtest Results===========")
print("============Default Hyperparameters==========")
now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M')
perf_stats_all = backtest_stats(account_value=df_account_value)
perf_stats_all = pd.DataFrame(perf_stats_all)
perf_stats_all.columns = ['Value']
# add trade performance metric
perf_stats_all = add_trade_perf_metric(df_actions,
perf_stats_all,
trade,
tp_metric)
perf_stats_all.to_csv("./"+config.RESULTS_DIR+"/perf_stats_all_"+now+'.csv')
#You can afford more trials for further optimization
from optuna.visualization import plot_optimization_history
fig = plot_optimization_history(study)
#"./"+config.RESULTS_DIR+
fig.write_image("./"+config.RESULTS_DIR+"/opt_hist.png")
fig.show()
from optuna.visualization import plot_contour
from optuna.visualization import plot_edf
from optuna.visualization import plot_intermediate_values
from optuna.visualization import plot_optimization_history
from optuna.visualization import plot_parallel_coordinate
from optuna.visualization import plot_param_importances
from optuna.visualization import plot_slice
#Hyperparameter importance
try:
fig = plot_param_importances(study)
fig.write_image("./"+config.RESULTS_DIR+"/params_importances.png")
fig.show()
except:
print('Cannot calculate hyperparameter importances: no variation')
fig = plot_edf(study)
fig.write_image("./"+config.RESULTS_DIR+"/emp_dist_func.png")
fig.show()
files.download('/content/final_ddpg_study__.pkl')
###Output
_____no_output_____
###Markdown
INTRODUCTION1. This tutorial introduces *trade-based metrics* for hyperparameter optimization of FinRL models.2. As the name implies, trade-based metrics are associated with the trade activity that FinRL captures in its actions tables. In general, a trade is represented by an entry in an actions file.3. Such metrics include counts of winning and losing trades, the total value of wins and losses, and the ratio of the average market value of wins to losses.4. In this tutorial, we tune hyperparameters for Stable Baselines3 models using Optuna.5. The default model hyperparameters may not be adequate for your custom portfolio or custom state space; reinforcement learning algorithms are sensitive to hyperparameters, so tuning is an important step.6. Hyperparameters are tuned against an objective, which is either maximized or minimized. ***In this tutorial, the ratio of average winning to losing trade value is used as the objective.*** This ratio is to be ***maximized***.7. This tutorial uses a multi-stock framework based on the 30 stocks (aka tickers) in the Dow Jones Industrial Average. Trade metrics are calculated for each ticker and then aggregated.8. **IMPORTANT**: While the DOW stocks represent a portfolio, portfolio optimization techniques, such as the classic Markowitz mean-variance model, are not applied in this analysis. Other FinRL tutorials and examples demonstrate portfolio optimization.
###Code
#Installing FinRL
# Set colab status to trigger installs
clb = True
print(f'Preparing for colab: {clb}')
pkgs = ['FinRL', 'optuna', 'Ray/rllib','plotly','ipywidgets']
if clb:
print(f'Installing packages: {pkgs}')
# Set Variables
## Fixed
tpm_hist = {} # record tp metric values for trials
tp_metric = 'avgwl' # specified trade_param_metric: ratio avg value win/loss
## Settable by User
n_trials = 5 # number of HP optimization runs
total_timesteps = 2000 # per HP optimization run
## Logging callback params
lc_threshold=1e-5
lc_patience=15
lc_trial_number=5
%%capture
if clb:
# installing packages
!pip install pyfolio-reloaded #original pyfolio no longer maintained
!pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
!pip install optuna
!pip install -U "ray[rllib]"
!pip install plotly
!pip install ipywidgets
!pip install -U kaleido # enables saving plots to file
#Importing the libraries
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# matplotlib.use('Agg')
import datetime
import optuna
from pathlib import Path
from google.colab import files
%matplotlib inline
from finrl.apps import config
from optuna.integration import PyTorchLightningPruningCallback
from finrl.finrl_meta.preprocessor.yahoodownloader import YahooDownloader
from finrl.finrl_meta.preprocessor.preprocessors import FeatureEngineer, data_split
from finrl.finrl_meta.env_stock_trading.env_stocktrading import StockTradingEnv
from finrl.finrl_meta.env_stock_trading.env_stocktrading_np import StockTradingEnv as StockTradingEnv_numpy
from finrl.drl_agents.stablebaselines3.models import DRLAgent
from finrl.drl_agents.rllib.models import DRLAgent as DRLAgent_rllib
from finrl.finrl_meta.data_processor import DataProcessor
import joblib
from finrl.plot import backtest_stats, backtest_plot, get_daily_return, get_baseline
import ray
from pprint import pprint
import kaleido
import sys
sys.path.append("../FinRL-Library")
import itertools
import torch
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
print(f'Torch device: {device}')
###Output
_____no_output_____
###Markdown
Zipline was developed by Quantopian, which also created pyfolio. The latter is used in FinRL to calculate and display backtest results. Despite the unavailability of zipline, as reported above, pyfolio remains operational. See [here](https://github.com/quantopian/pyfolio/issues/654) for more information.
###Code
## Connect to GPU for faster processing
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Not connected to a GPU')
else:
print(gpu_info)
import os
if not os.path.exists("./" + config.DATA_SAVE_DIR):
os.makedirs("./" + config.DATA_SAVE_DIR)
if not os.path.exists("./" + config.TRAINED_MODEL_DIR):
os.makedirs("./" + config.TRAINED_MODEL_DIR)
if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR):
os.makedirs("./" + config.TENSORBOARD_LOG_DIR)
if not os.path.exists("./" + config.RESULTS_DIR):
os.makedirs("./" + config.RESULTS_DIR)
###Output
_____no_output_____
###Markdown
COLLECTING DATA AND PREPROCESSING1. Load DOW 30 prices2. Add technical indicators3. Create *processed_full*, the final data set for training and testingTo save time in multiple runs, if the processed_full file is available, it is read from a previously saved csv file.
###Code
#Custom ticker list dataframe download
#TODO save df to avoid download
path_pf = '/content/ticker_data.csv'
if Path(path_pf).is_file():
print('Reading ticker data')
df = pd.read_csv(path_pf)
else:
print('Downloading ticker data')
ticker_list = config.DOW_30_TICKER
df = YahooDownloader(start_date = '2009-01-01',
end_date = '2021-10-01',
ticker_list = ticker_list).fetch_data()
df.to_csv('ticker_data.csv')
def create_processed_full(processed):
list_ticker = processed["tic"].unique().tolist()
list_date = list(pd.date_range(processed['date'].min(),processed['date'].max()).astype(str))
combination = list(itertools.product(list_date,list_ticker))
processed_full = pd.DataFrame(combination,columns=["date","tic"]).merge(processed,on=["date","tic"],how="left")
processed_full = processed_full[processed_full['date'].isin(processed['date'])]
processed_full = processed_full.sort_values(['date','tic'])
processed_full = processed_full.fillna(0)
processed_full.sort_values(['date','tic'],ignore_index=True).head(5)
processed_full.to_csv('processed_full.csv')
return processed_full
#You can add technical indicators and turbulence factor to dataframe
#Just set the use_technical_indicator=True, use_vix=True and use_turbulence=True
def create_techind():
fe = FeatureEngineer(
use_technical_indicator=True,
tech_indicator_list = config.TECHNICAL_INDICATORS_LIST,
use_vix=True,
use_turbulence=True,
user_defined_feature = False)
processed = fe.preprocess_data(df)
return processed
#Load price and technical indicator data from file if available
path_pf = '/content/processed_full.csv'
if Path(path_pf).is_file():
print('Reading processed_full data')
processed_full = pd.read_csv(path_pf)
else:
print('Creating processed_full file')
processed=create_techind()
processed_full=create_processed_full(processed)
train = data_split(processed_full, '2009-01-01','2020-07-01')
trade = data_split(processed_full, '2020-05-01','2021-10-01')
print(f'Number of training samples: {len(train)}')
print(f'Number of testing samples: {len(trade)}')
stock_dimension = len(train.tic.unique())
state_space = 1 + 2*stock_dimension + len(config.TECHNICAL_INDICATORS_LIST) * stock_dimension
print(f"Stock Dimension: {stock_dimension}, State Space: {state_space}")
#Defining the environment kwargs
env_kwargs = {
"hmax": 100,
"initial_amount": 1000000,
"buy_cost_pct": 0.001,
"sell_cost_pct": 0.001,
"state_space": state_space,
"stock_dim": stock_dimension,
"tech_indicator_list": config.TECHNICAL_INDICATORS_LIST,
"action_space": stock_dimension,
"reward_scaling": 1e-4
}
#Instantiate the training gym compatible environment
e_train_gym = StockTradingEnv(df = train, **env_kwargs)
#Instantiate the training environment
# Also instantiate our training agent
env_train, _ = e_train_gym.get_sb_env()
#print(type(env_train))
agent = DRLAgent(env = env_train)
#Instantiate the trading environment
e_trade_gym = StockTradingEnv(df = trade, turbulence_threshold = None, **env_kwargs)
###Output
_____no_output_____
###Markdown
TRADE PERFORMANCE CODEThe following code calculates trade performance metrics, which are then used as an objective for optimizing hyperparameter values. There are several available metrics. In this tutorial, the default choice is the ratio of average value of winning to losing trades.
###Code
#MAIN METHOD
# Calculates Trade Performance for Objective
# Called from objective method
# Returns selected trade perf metric(s)
# Requires actions and associated prices
def calc_trade_perf_metric(df_actions,
df_prices_trade,
tp_metric,
dbg=False):
df_actions_p, df_prices_p, tics = prep_data(df_actions.copy(),
df_prices_trade.copy())
# actions predicted by trained model on trade data
df_actions_p.to_csv('df_actions.csv')
# Confirms that actions, prices and tics are consistent
df_actions_s, df_prices_s, tics_prtfl = \
sync_tickers(df_actions_p.copy(),df_prices_p.copy(),tics)
# copy to ensure that tics from portfolio remains unchanged
tics = tics_prtfl.copy()
# Analysis is performed on each portfolio ticker
perf_data= collect_performance_data(df_actions_s, df_prices_s, tics)
# profit/loss for each ticker
pnl_all = calc_pnl_all(perf_data, tics)
# values for trade performance metrics
perf_results = calc_trade_perf(pnl_all)
df = pd.DataFrame.from_dict(perf_results, orient='index')
# calculate and return trade metric value as objective
m = calc_trade_metric(df,tp_metric)
print(f'Ratio Avg Win/Avg Loss: {m}')
k = str(len(tpm_hist)+1)
# save metric value
tpm_hist[k] = m
return m
# Supporting methods
def calc_trade_metric(df,metric='avgwl'):
'''# trades', '# wins', '# losses', 'wins total value', 'wins avg value',
'losses total value', 'losses avg value'''
# For this tutorial, the only metric available is the ratio of
# average values of winning to losing trades. Others are in development.
# some test cases produce no losing trades.
# The code below assigns a value as a multiple of the highest value during
# previous hp optimization runs. If the first run experiences no losses,
# a fixed value is assigned for the ratio
tpm_mult = 1.0
avgwl_no_losses = 25
if metric == 'avgwl':
if sum(df['# losses']) == 0:
try:
return max(tpm_hist.values())*tpm_mult
except ValueError:
return avgwl_no_losses
avg_w = sum(df['wins total value'])/sum(df['# wins'])
avg_l = sum(df['losses total value'])/sum(df['# losses'])
m = abs(avg_w/avg_l)
return m
def prep_data(df_actions,
df_prices_trade):
df=df_prices_trade[['date','close','tic']]
df['Date'] = pd.to_datetime(df['date'])
df = df.set_index('Date')
# set indices on both df to datetime
idx = pd.to_datetime(df_actions.index, infer_datetime_format=True)
df_actions.index=idx
tics = np.unique(df.tic)
n_tics = len(tics)
print(f'Number of tickers: {n_tics}')
print(f'Tickers: {tics}')
dategr = df.groupby('tic')
p_d={t:dategr.get_group(t).loc[:,'close'] for t in tics}
df_prices = pd.DataFrame.from_dict(p_d)
df_prices.index = df_prices.index.normalize()
return df_actions, df_prices, tics
# prepares for integrating action and price files
def link_prices_actions(df_a,
df_p):
cols_a = [t + '_a' for t in df_a.columns]
df_a.columns = cols_a
cols_p = [t + '_p' for t in df_p.columns]
df_p.columns = cols_p
return df_a, df_p
def sync_tickers(df_actions,df_tickers_p,tickers):
# Some DOW30 components may not be included in portfolio
# passed tickers includes all DOW30 components
# actions and ticker files may have different length indices
if len(df_actions) != len(df_tickers_p):
msng_dates = set(df_actions.index)^set(df_tickers_p.index)
try:
#assumption is prices has one additional timestamp (row)
df_tickers_p.drop(msng_dates,inplace=True)
except:
df_actions.drop(msng_dates,inplace=True)
df_actions, df_tickers_p = link_prices_actions(df_actions,df_tickers_p)
# identify any DOW components not in portfolio
t_not_in_a = [t for t in tickers if t + '_a' not in list(df_actions.columns)]
# remove t_not_in_a from df_tickers_p
drop_cols = [t + '_p' for t in t_not_in_a]
df_tickers_p.drop(columns=drop_cols,inplace=True)
# Tickers in portfolio
tickers_prtfl = [c.split('_')[0] for c in df_actions.columns]
return df_actions,df_tickers_p, tickers_prtfl
def collect_performance_data(dfa,dfp,tics, dbg=False):
perf_data = {}
# In current version, files columns include secondary identifier
for t in tics:
# actions: purchase/sale of DOW equities
acts = dfa['_'.join([t,'a'])].values
# ticker prices
prices = dfp['_'.join([t,'p'])].values
# market value of purchases/sales
tvals_init = np.multiply(acts,prices)
d={'actions':acts, 'prices':prices,'init_values':tvals_init}
perf_data[t]=d
return perf_data
def calc_pnl_all(perf_dict, tics_all):
# calculate profit/loss for each ticker
print(f'Calculating profit/loss for each ticker')
pnl_all = {}
for tic in tics_all:
pnl_t = []
tic_data = perf_dict[tic]
init_values = tic_data['init_values']
acts = tic_data['actions']
prices = tic_data['prices']
cs = np.cumsum(acts)
args_s = [i + 1 for i in range(len(cs) - 1) if cs[i + 1] < cs[i]]
# tic actions with no sales
if not args_s:
pnl = complete_calc_buyonly(acts, prices, init_values)
pnl_all[tic] = pnl
continue
# copy acts: acts_rev will be revised based on closing/reducing init positions
pnl_all = execute_position_sales(tic,acts,prices,args_s,pnl_all)
return pnl_all
def complete_calc_buyonly(actions, prices, init_values):
# calculate final pnl for each ticker assuming no sales
fnl_price = prices[-1]
final_values = np.multiply(fnl_price, actions)
pnl = np.subtract(final_values, init_values)
return pnl
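# Illustrative example (added): buy-only pnl on made-up numbers. Buying 10 and then 5
# shares at prices 100 and 110, with a final price of 120, yields (120-100)*10 and (120-110)*5.
_acts_demo = np.array([10.0, 5.0, 0.0])
_prices_demo = np.array([100.0, 110.0, 120.0])
print(complete_calc_buyonly(_acts_demo, _prices_demo, np.multiply(_acts_demo, _prices_demo)))  # [200. 50. 0.]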
def execute_position_sales(tic,acts,prices,args_s,pnl_all):
# calculate final pnl for each ticker with sales
pnl_t = []
acts_rev = acts.copy()
# location of sales transactions
for s in args_s: # s is a scalar index of a sale transaction
# price_s = [prices[s]]
act_s = [acts_rev[s]]
args_b = [i for i in range(s) if acts_rev[i] > 0]
prcs_init_trades = prices[args_b]
acts_init_trades = acts_rev[args_b]
# update actions for sales
# reduce/eliminate init values through trades
# always start with earliest purchase that has not been closed through sale
# selectors for purchase and sales trades
# find earliest remaining purchase
arg_sel = min(args_b)
# sel_s = len(acts_trades) - 1
# closing part/all of earliest init trade not yet closed
# sales actions are negative
# in this test case, abs_val of init and sales share counts are same
# zero-out sales actions
# market value of sale
# max number of shares to be closed: may be less than # originally purchased
acts_shares = min(abs(act_s.pop()), acts_rev[arg_sel])
# mv of shares when purchased
mv_p = abs(acts_shares * prices[arg_sel])
# mv of sold shares
mv_s = abs(acts_shares * prices[s])
# calc pnl
pnl = mv_s - mv_p
# reduce init share count
# close all/part of init purchase
acts_rev[arg_sel] -= acts_shares
acts_rev[s] += acts_shares
# calculate pnl for trade
# value of associated purchase
# find earliest non-zero positive act in acts_revs
pnl_t.append(pnl)
pnl_op = calc_pnl_for_open_positions(acts_rev, prices)
#pnl_op is list
# add pnl_op results (if any) to pnl_t (both lists)
pnl_t.extend(pnl_op)
#print(f'Total pnl for {tic}: {np.sum(pnl_t)}')
pnl_all[tic] = np.array(pnl_t)
return pnl_all
def calc_pnl_for_open_positions(acts,prices):
# identify any positive share values after accounting for sales
pnl = []
fp = prices[-1] # last price
open_pos_arg = np.argwhere(acts>0)
if len(open_pos_arg)==0:return pnl # no open positions
mkt_vals_open = np.multiply(acts[open_pos_arg], prices[open_pos_arg])
# mkt val at end of testing period
# treat as trades for purposes of calculating pnl at end of testing period
mkt_vals_final = np.multiply(fp, acts[open_pos_arg])
pnl_a = np.subtract(mkt_vals_final, mkt_vals_open)
#convert to list
pnl = [i[0] for i in pnl_a.tolist()]
#print(f'Market value of open positions at end of testing {pnl}')
return pnl
def calc_trade_perf(pnl_d):
# calculate trade performance metrics
perf_results = {}
for t,pnl in pnl_d.items():
wins = pnl[pnl>0] # total val
losses = pnl[pnl<0]
n_wins = len(wins)
n_losses = len(losses)
n_trades = n_wins + n_losses
wins_val = np.sum(wins)
losses_val = np.sum(losses)
wins_avg = 0 if n_wins==0 else np.mean(wins)
#print(f'{t} n_wins: {n_wins} n_losses: {n_losses}')
losses_avg = 0 if n_losses==0 else np.mean(losses)
d = {'# trades':n_trades,'# wins':n_wins,'# losses':n_losses,
'wins total value':wins_val, 'wins avg value':wins_avg,
'losses total value':losses_val, 'losses avg value':losses_avg,}
perf_results[t] = d
return perf_results
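# Illustrative example (added): calc_trade_perf on a tiny synthetic pnl dictionary.
# AAPL has 3 trades (2 wins totalling 180, 1 loss of -40); MSFT has 2 trades.
_toy_pnl = {'AAPL': np.array([120.0, -40.0, 60.0]), 'MSFT': np.array([-25.0, 75.0])}
pprint(calc_trade_perf(_toy_pnl))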
###Output
_____no_output_____
###Markdown
TUNING HYPERPARAMETERS USING OPTUNA1. Go to this [link](https://github.com/DLR-RM/rl-baselines3-zoo/blob/master/utils/hyperparams_opt.py); it lists the hyperparameters that can be tuned for each of the models.2. For your model, pick the hyperparameters you want to optimize and return them as a dictionary.3. Optuna also provides a hyperparameter importance feature that highlights which hyperparameters matter most for tuning.4. By default, Optuna uses the [TPESampler](https://www.youtube.com/watch?v=tdwgR1AqQ8Y) to sample hyperparameters from the search space.
###Code
def sample_ddpg_params(trial:optuna.Trial):
# Size of the replay buffer
buffer_size = trial.suggest_categorical("buffer_size", [int(1e4), int(1e5), int(1e6)])
learning_rate = trial.suggest_loguniform("learning_rate", 1e-5, 1)
batch_size = trial.suggest_categorical("batch_size", [32, 64, 128, 256, 512])
return {"buffer_size": buffer_size,
"learning_rate":learning_rate,
"batch_size":batch_size}
###Output
_____no_output_____
###Markdown
*OPTIONAL CODE FOR SAMPLING HYPERPARAMETERS*Replace the current call in the *objective* function with `hyperparameters = sample_ddpg_params_all(trial)`
###Code
def sample_ddpg_params_all(trial:optuna.Trial,
# fixed values from previous study
learning_rate=0.0103,
batch_size=128,
buffer_size=int(1e6)):
gamma = trial.suggest_categorical("gamma", [0.94, 0.96, 0.98])
# Polyak coeff
tau = trial.suggest_categorical("tau", [0.08, 0.1, 0.12])
train_freq = trial.suggest_categorical("train_freq", [512,768,1024])
gradient_steps = train_freq
noise_type = trial.suggest_categorical("noise_type", ["ornstein-uhlenbeck", "normal", None])
noise_std = trial.suggest_categorical("noise_std", [.1,.2,.3] )
# NOTE: Add "verybig" to net_arch when tuning HER (see TD3)
net_arch = trial.suggest_categorical("net_arch", ["small", "big"])
# activation_fn = trial.suggest_categorical('activation_fn', [nn.Tanh, nn.ReLU, nn.ELU, nn.LeakyReLU])
net_arch = {
"small": [64, 64],
"medium": [256, 256],
"big": [512, 512],
}[net_arch]
hyperparams = {
"batch_size": batch_size,
"buffer_size": buffer_size,
"gamma": gamma,
"gradient_steps": gradient_steps,
"learning_rate": learning_rate,
"tau": tau,
"train_freq": train_freq,
#"noise_std": noise_std,
#"noise_type": noise_type,
"policy_kwargs": dict(net_arch=net_arch)
}
return hyperparams
###Output
_____no_output_____
###Markdown
CALLBACKS1. The callback stops the study when the improvement in the objective falls below a given threshold.2. It only acts after the minimum number of trials (trial_number) has been reached, not before.3. It waits until the below-threshold condition has persisted for patience trials before stopping.
###Code
class LoggingCallback:
def __init__(self,threshold,trial_number,patience):
'''
threshold: float tolerance for improvement in the objective
trial_number: int minimum number of trials before the callback can stop the study
patience: int number of below-threshold trials tolerated before stopping
'''
self.threshold = threshold
self.trial_number = trial_number
self.patience = patience
print(f'Callback threshold {self.threshold}, \
trial_number {self.trial_number}, \
patience {self.patience}')
self.cb_list = [] #Trials list for which threshold is reached
def __call__(self,study:optuna.study, frozen_trial:optuna.Trial):
#Setting the best value in the current trial
study.set_user_attr("previous_best_value", study.best_value)
#Checking if the minimum number of trials have passed
if frozen_trial.number >self.trial_number:
previous_best_value = study.user_attrs.get("previous_best_value",None)
#Checking if the previous and current objective values have the same sign
if previous_best_value * study.best_value >=0:
#Checking for the threshold condition
if abs(previous_best_value-study.best_value) < self.threshold:
self.cb_list.append(frozen_trial.number)
#If threshold is achieved for the patience amount of time
if len(self.cb_list)>self.patience:
print('The study stops now...')
print('With number',frozen_trial.number ,'and value ',frozen_trial.value)
print('The previous and current best values are {} and {} respectively'
.format(previous_best_value, study.best_value))
study.stop()
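# Added (illustrative): exercising LoggingCallback on a throwaway study whose
# objective is constant, so the improvement threshold is met immediately and the
# study stops once the trial_number and patience conditions are satisfied.
_demo_cb = LoggingCallback(threshold=1e-5, trial_number=2, patience=2)
_demo_study = optuna.create_study(direction='maximize')
_demo_study.optimize(lambda _trial: 1.0, n_trials=10, callbacks=[_demo_cb])
print(f'Demo study ran {len(_demo_study.trials)} of 10 requested trials')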
from IPython.display import clear_output
import sys
os.makedirs("models",exist_ok=True)
def objective(trial:optuna.Trial):
#Trial will suggest a set of hyperparameters from the specified range
# Optional to optimize larger set of parameters
# hyperparameters = sample_ddpg_params_all(trial)
# Optimize buffer size, batch size, learning rate
hyperparameters = sample_ddpg_params(trial)
#print(f'Hyperparameters from objective: {hyperparameters.keys()}')
policy_kwargs = None # default
if 'policy_kwargs' in hyperparameters.keys():
policy_kwargs = hyperparameters['policy_kwargs']
del hyperparameters['policy_kwargs']
#print(f'Policy keyword arguments {policy_kwargs}')
model_ddpg = agent.get_model("ddpg",
policy_kwargs = policy_kwargs,
model_kwargs = hyperparameters )
#You can increase it for better comparison
trained_ddpg = agent.train_model(model=model_ddpg,
tb_log_name="ddpg",
total_timesteps=total_timesteps)
trained_ddpg.save('models/ddpg_{}.pth'.format(trial.number))
clear_output(wait=True)
#For the given hyperparameters, determine the account value in the trading period
df_account_value, df_actions = DRLAgent.DRL_prediction(
model=trained_ddpg,
environment = e_trade_gym)
# Calculate trade performance metric
# Currently ratio of average win and loss market values
tpm = calc_trade_perf_metric(df_actions,trade,tp_metric)
return tpm
#Create a study object and specify the direction as 'maximize',
#since the trade performance objective is to be maximized
#The pruner stops unpromising trials
#Use a pruner, otherwise you may get errors related to model divergence
#You can also use a multivariate sampler
#sampler = optuna.samplers.TPESampler(multivariate=True,seed=42)
sampler = optuna.samplers.TPESampler()
study = optuna.create_study(study_name="ddpg_study",direction='maximize',
sampler = sampler, pruner=optuna.pruners.HyperbandPruner())
logging_callback = LoggingCallback(threshold=lc_threshold,
patience=lc_patience,
trial_number=lc_trial_number)
#You can increase the n_trials for a better search space scanning
study.optimize(objective, n_trials=n_trials,catch=(ValueError,),callbacks=[logging_callback])
joblib.dump(study, "final_ddpg_study__.pkl")
#Get the best hyperparameters
print('Hyperparameters after tuning',study.best_params)
print('Hyperparameters before tuning',config.DDPG_PARAMS)
study.best_trial
from stable_baselines3 import DDPG
tuned_model_ddpg = DDPG.load('models/ddpg_{}.pth'.format(study.best_trial.number),env=env_train)
#Trading period account value with tuned model
df_account_value_tuned, df_actions_tuned = DRLAgent.DRL_prediction(
model=tuned_model_ddpg,
environment = e_trade_gym)
def add_trade_perf_metric(df_actions,
perf_stats_all,
trade,
tp_metric):
tpm = calc_trade_perf_metric(df_actions,trade,tp_metric)
trp_metric = {'Value':tpm}
df2 = pd.DataFrame(trp_metric,index=['Trade_Perf'])
perf_stats_all = pd.concat([perf_stats_all, df2])  # DataFrame.append is removed in newer pandas
return perf_stats_all
now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M')
df_actions_tuned.to_csv("./"+config.RESULTS_DIR+"/tuned_actions_" +now+ '.csv')
#Backtesting with the tuned model
print("==============Get Backtest Results===========")
print("==============Tuned Model===========")
now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M')
perf_stats_all_tuned = backtest_stats(account_value=df_account_value_tuned)
perf_stats_all_tuned = pd.DataFrame(perf_stats_all_tuned)
perf_stats_all_tuned.columns = ['Value']
# add trade performance metric
perf_stats_all_tuned = \
add_trade_perf_metric(df_actions_tuned,
perf_stats_all_tuned,
trade,
tp_metric)
perf_stats_all_tuned.to_csv("./"+config.RESULTS_DIR+"/perf_stats_all_tuned_"+now+'.csv')
#Now train with untuned (default) hyperparameters
#Default config.DDPG_PARAMS
non_tuned_model_ddpg = agent.get_model("ddpg",model_kwargs = config.DDPG_PARAMS )
trained_ddpg = agent.train_model(model=non_tuned_model_ddpg,
tb_log_name='ddpg',
total_timesteps=total_timesteps)
df_account_value, df_actions = DRLAgent.DRL_prediction(
model=trained_ddpg,
environment = e_trade_gym)
#Backtesting with default (untuned) hyperparameters
print("==============Get Backtest Results===========")
print("============Default Hyperparameters==========")
now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M')
perf_stats_all = backtest_stats(account_value=df_account_value)
perf_stats_all = pd.DataFrame(perf_stats_all)
perf_stats_all.columns = ['Value']
# add trade performance metric
perf_stats_all = add_trade_perf_metric(df_actions,
perf_stats_all,
trade,
tp_metric)
perf_stats_all.to_csv("./"+config.RESULTS_DIR+"/perf_stats_all_"+now+'.csv')
#You can afford more trials for further optimization
from optuna.visualization import plot_optimization_history
fig = plot_optimization_history(study)
#"./"+config.RESULTS_DIR+
fig.write_image("./"+config.RESULTS_DIR+"/opt_hist.png")
fig.show()
from optuna.visualization import plot_contour
from optuna.visualization import plot_edf
from optuna.visualization import plot_intermediate_values
from optuna.visualization import plot_optimization_history
from optuna.visualization import plot_parallel_coordinate
from optuna.visualization import plot_param_importances
from optuna.visualization import plot_slice
#Hyperparameter importance
try:
fig = plot_param_importances(study)
fig.write_image("./"+config.RESULTS_DIR+"/params_importances.png")
fig.show()
except:
print('Cannot calculate hyperparameter importances: no variation')
fig = plot_edf(study)
fig.write_image("./"+config.RESULTS_DIR+"/emp_dist_func.png")
fig.show()
files.download('/content/final_ddpg_study__.pkl')
###Output
_____no_output_____
###Markdown
Introduction1. This tutorial introduces *trade-based metrics* for hyperparameter optimization of FinRL models.2. As the name implies, trade-based metrics are associated with the trade activity that FinRL captures in its actions tables. In general, a trade is represented by an entry in an actions file.3. Such metrics include counts of winning and losing trades, the total value of wins and losses, and the ratio of the average market value of wins to losses.4. In this tutorial, we tune hyperparameters for Stable Baselines3 models using Optuna.5. The default model hyperparameters may not be adequate for your custom portfolio or custom state space; reinforcement learning algorithms are sensitive to hyperparameters, so tuning is an important step.6. Hyperparameters are tuned against an objective, which is either maximized or minimized. ***In this tutorial, the ratio of average winning to losing trade value is used as the objective.*** This ratio is to be ***maximized***.7. This tutorial uses a multi-stock framework based on the 30 stocks (aka tickers) in the Dow Jones Industrial Average. Trade metrics are calculated for each ticker and then aggregated.8. **IMPORTANT**: While the DOW stocks represent a portfolio, portfolio optimization techniques, such as the classic Markowitz mean-variance model, are not applied in this analysis. Other FinRL tutorials and examples demonstrate portfolio optimization.
###Code
#Installing FinRL
# Set colab status to trigger installs
clb = True
print(f'Preparing for colab: {clb}')
pkgs = ['FinRL', 'optuna', 'Ray/rllib','plotly','ipywidgets']
if clb:
print(f'Installing packages: {pkgs}')
# Set Variables
## Fixed
tpm_hist = {} # record tp metric values for trials
tp_metric = 'avgwl' # specified trade_param_metric: ratio avg value win/loss
## Settable by User
n_trials = 5 # number of HP optimization runs
total_timesteps = 2000 # per HP optimization run
## Logging callback params
lc_threshold=1e-5
lc_patience=15
lc_trial_number=5
%%capture
if clb:
# installing packages
!pip install pyfolio-reloaded #original pyfolio no longer maintained
!pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
!pip install optuna
!pip install -U "ray[rllib]"
!pip install plotly
!pip install ipywidgets
!pip install -U kaleido # enables saving plots to file
#Importing the libraries
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# matplotlib.use('Agg')
import datetime
import optuna
from pathlib import Path
from google.colab import files
%matplotlib inline
from finrl import config
from finrl import config_tickers
from optuna.integration import PyTorchLightningPruningCallback
from finrl.finrl_meta.preprocessor.yahoodownloader import YahooDownloader
from finrl.finrl_meta.preprocessor.preprocessors import FeatureEngineer, data_split
from finrl.finrl_meta.env_stock_trading.env_stocktrading import StockTradingEnv
from finrl.finrl_meta.env_stock_trading.env_stocktrading_np import StockTradingEnv as StockTradingEnv_numpy
from finrl.agents.stablebaselines3.models import DRLAgent
from finrl.agents.rllib.models import DRLAgent as DRLAgent_rllib
from finrl.finrl_meta.data_processor import DataProcessor
import joblib
from finrl.plot import backtest_stats, backtest_plot, get_daily_return, get_baseline
import ray
from pprint import pprint
import kaleido
import sys
sys.path.append("../FinRL-Library")
import itertools
import torch
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
print(f'Torch device: {device}')
###Output
_____no_output_____
###Markdown
Zipline was developed by Quantopian, which also created pyfolio. The latter is used in FinRL to calculate and display backtest results. Despite the unavailability of zipline, as reported above, pyfolio remains operational. See [here](https://github.com/quantopian/pyfolio/issues/654) for more information.
###Code
## Connect to GPU for faster processing
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Not connected to a GPU')
else:
print(gpu_info)
import os
if not os.path.exists("./" + config.DATA_SAVE_DIR):
os.makedirs("./" + config.DATA_SAVE_DIR)
if not os.path.exists("./" + config.TRAINED_MODEL_DIR):
os.makedirs("./" + config.TRAINED_MODEL_DIR)
if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR):
os.makedirs("./" + config.TENSORBOARD_LOG_DIR)
if not os.path.exists("./" + config.RESULTS_DIR):
os.makedirs("./" + config.RESULTS_DIR)
###Output
_____no_output_____
###Markdown
Collecting data and preprocessing1. Load DOW 30 prices2. Add technical indicators3. Create *processed_full*, the final data set for training and testingTo save time in multiple runs, if the processed_full file is available, it is read from a previously saved csv file.
###Code
#Custom ticker list dataframe download
#TODO save df to avoid download
path_pf = '/content/ticker_data.csv'
if Path(path_pf).is_file():
print('Reading ticker data')
df = pd.read_csv(path_pf)
else:
print('Downloading ticker data')
ticker_list = config_tickers.DOW_30_TICKER
df = YahooDownloader(start_date = '2009-01-01',
end_date = '2021-10-01',
ticker_list = ticker_list).fetch_data()
df.to_csv('ticker_data.csv')
def create_processed_full(processed):
list_ticker = processed["tic"].unique().tolist()
list_date = list(pd.date_range(processed['date'].min(),processed['date'].max()).astype(str))
combination = list(itertools.product(list_date,list_ticker))
processed_full = pd.DataFrame(combination,columns=["date","tic"]).merge(processed,on=["date","tic"],how="left")
processed_full = processed_full[processed_full['date'].isin(processed['date'])]
processed_full = processed_full.sort_values(['date','tic'])
processed_full = processed_full.fillna(0)
processed_full.sort_values(['date','tic'],ignore_index=True).head(5)
processed_full.to_csv('processed_full.csv')
return processed_full
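# Illustrative example (added): create_processed_full builds the full date x ticker
# grid with itertools.product before merging prices back in. A tiny hypothetical case:
_demo_dates = ['2021-01-04', '2021-01-05']
_demo_tics = ['AAPL', 'MSFT']
print(list(itertools.product(_demo_dates, _demo_tics)))
# [('2021-01-04', 'AAPL'), ('2021-01-04', 'MSFT'), ('2021-01-05', 'AAPL'), ('2021-01-05', 'MSFT')]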
#You can add technical indicators and turbulence factor to dataframe
#Just set the use_technical_indicator=True, use_vix=True and use_turbulence=True
def create_techind():
fe = FeatureEngineer(
use_technical_indicator=True,
tech_indicator_list = config.TECHNICAL_INDICATORS_LIST,
use_vix=True,
use_turbulence=True,
user_defined_feature = False)
processed = fe.preprocess_data(df)
return processed
#Load price and technical indicator data from file if available
path_pf = '/content/processed_full.csv'
if Path(path_pf).is_file():
print('Reading processed_full data')
processed_full = pd.read_csv(path_pf)
else:
print('Creating processed_full file')
processed=create_techind()
processed_full=create_processed_full(processed)
train = data_split(processed_full, '2009-01-01','2020-07-01')
trade = data_split(processed_full, '2020-05-01','2021-10-01')
print(f'Number of training samples: {len(train)}')
print(f'Number of testing samples: {len(trade)}')
stock_dimension = len(train.tic.unique())
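# The state is assumed to be laid out as [cash] + [price per stock] + [shares held per stock] + [each technical indicator per stock], which yields the 1 + 2*D + len(indicators)*D formula below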
state_space = 1 + 2*stock_dimension + len(config.TECHNICAL_INDICATORS_LIST) * stock_dimension
print(f"Stock Dimension: {stock_dimension}, State Space: {state_space}")
#Defining the environment kwargs
env_kwargs = {
"hmax": 100,
"initial_amount": 1000000,
"buy_cost_pct": 0.001,
"sell_cost_pct": 0.001,
"state_space": state_space,
"stock_dim": stock_dimension,
"tech_indicator_list": config.TECHNICAL_INDICATORS_LIST,
"action_space": stock_dimension,
"reward_scaling": 1e-4
}
#Instantiate the training gym compatible environment
e_train_gym = StockTradingEnv(df = train, **env_kwargs)
#Instantiate the training environment
# Also instantiate our training agent
env_train, _ = e_train_gym.get_sb_env()
#print(type(env_train))
agent = DRLAgent(env = env_train)
#Instantiate the trading environment
e_trade_gym = StockTradingEnv(df = trade, turbulence_threshold = None, **env_kwargs)
###Output
_____no_output_____
###Markdown
Trade performance codeThe following code calculates trade performance metrics, which are then used as the objective for optimizing hyperparameter values. Several metrics are planned; in this tutorial the default (and currently only implemented) choice is the ratio of the average value of winning trades to the average value of losing trades.
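As a rough illustration with made-up per-trade profit/loss values (not from the notebook), the objective is computed like this:
import numpy as np
pnl = np.array([120.0, -40.0, 75.0, -25.0, 60.0])   # hypothetical per-trade P/L
wins, losses = pnl[pnl > 0], pnl[pnl < 0]
avgwl = abs(wins.mean() / losses.mean())            # ratio of average win to average loss
print(avgwl)                                        # ~2.62, the value to be maximized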
###Code
#Main method
# Calculates Trade Performance for Objective
# Called from objective method
# Returns selected trade perf metric(s)
# Requires actions and associated prices
def calc_trade_perf_metric(df_actions,
df_prices_trade,
tp_metric,
dbg=False):
df_actions_p, df_prices_p, tics = prep_data(df_actions.copy(),
df_prices_trade.copy())
# actions predicted by trained model on trade data
df_actions_p.to_csv('df_actions.csv')
# Confirms that actions, prices and tics are consistent
df_actions_s, df_prices_s, tics_prtfl = \
sync_tickers(df_actions_p.copy(),df_prices_p.copy(),tics)
# copy to ensure that tics from portfolio remains unchanged
tics = tics_prtfl.copy()
# Analysis is performed on each portfolio ticker
perf_data= collect_performance_data(df_actions_s, df_prices_s, tics)
# profit/loss for each ticker
pnl_all = calc_pnl_all(perf_data, tics)
# values for trade performance metrics
perf_results = calc_trade_perf(pnl_all)
df = pd.DataFrame.from_dict(perf_results, orient='index')
# calculate and return trade metric value as objective
m = calc_trade_metric(df,tp_metric)
print(f'Ratio Avg Win/Avg Loss: {m}')
k = str(len(tpm_hist)+1)
# save metric value
tpm_hist[k] = m
return m
# Supporting methods
def calc_trade_metric(df,metric='avgwl'):
    '''Expected columns in df: '# trades', '# wins', '# losses', 'wins total value',
    'wins avg value', 'losses total value', 'losses avg value'.'''
# For this tutorial, the only metric available is the ratio of
# average values of winning to losing trades. Others are in development.
# some test cases produce no losing trades.
# The code below assigns a value as a multiple of the highest value during
# previous hp optimization runs. If the first run experiences no losses,
# a fixed value is assigned for the ratio
tpm_mult = 1.0
avgwl_no_losses = 25
if metric == 'avgwl':
if sum(df['# losses']) == 0:
try:
return max(tpm_hist.values())*tpm_mult
except ValueError:
return avgwl_no_losses
avg_w = sum(df['wins total value'])/sum(df['# wins'])
avg_l = sum(df['losses total value'])/sum(df['# losses'])
m = abs(avg_w/avg_l)
return m
def prep_data(df_actions,
df_prices_trade):
    df=df_prices_trade[['date','close','tic']].copy()  # copy to avoid SettingWithCopyWarning below
df['Date'] = pd.to_datetime(df['date'])
df = df.set_index('Date')
# set indices on both df to datetime
idx = pd.to_datetime(df_actions.index, infer_datetime_format=True)
df_actions.index=idx
tics = np.unique(df.tic)
n_tics = len(tics)
print(f'Number of tickers: {n_tics}')
print(f'Tickers: {tics}')
dategr = df.groupby('tic')
p_d={t:dategr.get_group(t).loc[:,'close'] for t in tics}
df_prices = pd.DataFrame.from_dict(p_d)
df_prices.index = df_prices.index.normalize()
return df_actions, df_prices, tics
# prepares for integrating action and price files
def link_prices_actions(df_a,
df_p):
cols_a = [t + '_a' for t in df_a.columns]
df_a.columns = cols_a
cols_p = [t + '_p' for t in df_p.columns]
df_p.columns = cols_p
return df_a, df_p
def sync_tickers(df_actions,df_tickers_p,tickers):
# Some DOW30 components may not be included in portfolio
# passed tickers includes all DOW30 components
# actions and ticker files may have different length indices
if len(df_actions) != len(df_tickers_p):
msng_dates = set(df_actions.index)^set(df_tickers_p.index)
try:
#assumption is prices has one additional timestamp (row)
df_tickers_p.drop(msng_dates,inplace=True)
except:
df_actions.drop(msng_dates,inplace=True)
df_actions, df_tickers_p = link_prices_actions(df_actions,df_tickers_p)
# identify any DOW components not in portfolio
t_not_in_a = [t for t in tickers if t + '_a' not in list(df_actions.columns)]
# remove t_not_in_a from df_tickers_p
drop_cols = [t + '_p' for t in t_not_in_a]
df_tickers_p.drop(columns=drop_cols,inplace=True)
# Tickers in portfolio
tickers_prtfl = [c.split('_')[0] for c in df_actions.columns]
return df_actions,df_tickers_p, tickers_prtfl
def collect_performance_data(dfa,dfp,tics, dbg=False):
perf_data = {}
    # In the current version, column names carry a secondary identifier ('_a' for actions, '_p' for prices)
for t in tics:
# actions: purchase/sale of DOW equities
acts = dfa['_'.join([t,'a'])].values
# ticker prices
prices = dfp['_'.join([t,'p'])].values
# market value of purchases/sales
tvals_init = np.multiply(acts,prices)
d={'actions':acts, 'prices':prices,'init_values':tvals_init}
perf_data[t]=d
return perf_data
def calc_pnl_all(perf_dict, tics_all):
# calculate profit/loss for each ticker
print(f'Calculating profit/loss for each ticker')
pnl_all = {}
for tic in tics_all:
pnl_t = []
tic_data = perf_dict[tic]
init_values = tic_data['init_values']
acts = tic_data['actions']
prices = tic_data['prices']
cs = np.cumsum(acts)
args_s = [i + 1 for i in range(len(cs) - 1) if cs[i + 1] < cs[i]]
# tic actions with no sales
if not args_s:
pnl = complete_calc_buyonly(acts, prices, init_values)
pnl_all[tic] = pnl
continue
# copy acts: acts_rev will be revised based on closing/reducing init positions
pnl_all = execute_position_sales(tic,acts,prices,args_s,pnl_all)
return pnl_all
def complete_calc_buyonly(actions, prices, init_values):
# calculate final pnl for each ticker assuming no sales
fnl_price = prices[-1]
final_values = np.multiply(fnl_price, actions)
pnl = np.subtract(final_values, init_values)
return pnl
def execute_position_sales(tic,acts,prices,args_s,pnl_all):
# calculate final pnl for each ticker with sales
pnl_t = []
acts_rev = acts.copy()
# location of sales transactions
    for s in args_s: # s is a scalar index of a sale transaction
# price_s = [prices[s]]
act_s = [acts_rev[s]]
args_b = [i for i in range(s) if acts_rev[i] > 0]
prcs_init_trades = prices[args_b]
acts_init_trades = acts_rev[args_b]
# update actions for sales
# reduce/eliminate init values through trades
# always start with earliest purchase that has not been closed through sale
# selectors for purchase and sales trades
# find earliest remaining purchase
arg_sel = min(args_b)
# sel_s = len(acts_trades) - 1
# closing part/all of earliest init trade not yet closed
# sales actions are negative
# in this test case, abs_val of init and sales share counts are same
# zero-out sales actions
# market value of sale
# max number of shares to be closed: may be less than # originally purchased
acts_shares = min(abs(act_s.pop()), acts_rev[arg_sel])
# mv of shares when purchased
mv_p = abs(acts_shares * prices[arg_sel])
# mv of sold shares
mv_s = abs(acts_shares * prices[s])
# calc pnl
pnl = mv_s - mv_p
# reduce init share count
# close all/part of init purchase
acts_rev[arg_sel] -= acts_shares
acts_rev[s] += acts_shares
# calculate pnl for trade
# value of associated purchase
# find earliest non-zero positive act in acts_revs
pnl_t.append(pnl)
pnl_op = calc_pnl_for_open_positions(acts_rev, prices)
#pnl_op is list
# add pnl_op results (if any) to pnl_t (both lists)
pnl_t.extend(pnl_op)
#print(f'Total pnl for {tic}: {np.sum(pnl_t)}')
pnl_all[tic] = np.array(pnl_t)
return pnl_all
def calc_pnl_for_open_positions(acts,prices):
# identify any positive share values after accounting for sales
pnl = []
fp = prices[-1] # last price
open_pos_arg = np.argwhere(acts>0)
if len(open_pos_arg)==0:return pnl # no open positions
mkt_vals_open = np.multiply(acts[open_pos_arg], prices[open_pos_arg])
# mkt val at end of testing period
# treat as trades for purposes of calculating pnl at end of testing period
mkt_vals_final = np.multiply(fp, acts[open_pos_arg])
pnl_a = np.subtract(mkt_vals_final, mkt_vals_open)
#convert to list
pnl = [i[0] for i in pnl_a.tolist()]
#print(f'Market value of open positions at end of testing {pnl}')
return pnl
def calc_trade_perf(pnl_d):
# calculate trade performance metrics
perf_results = {}
for t,pnl in pnl_d.items():
wins = pnl[pnl>0] # total val
losses = pnl[pnl<0]
n_wins = len(wins)
n_losses = len(losses)
n_trades = n_wins + n_losses
wins_val = np.sum(wins)
losses_val = np.sum(losses)
wins_avg = 0 if n_wins==0 else np.mean(wins)
#print(f'{t} n_wins: {n_wins} n_losses: {n_losses}')
losses_avg = 0 if n_losses==0 else np.mean(losses)
d = {'# trades':n_trades,'# wins':n_wins,'# losses':n_losses,
'wins total value':wins_val, 'wins avg value':wins_avg,
'losses total value':losses_val, 'losses avg value':losses_avg,}
perf_results[t] = d
return perf_results
###Output
_____no_output_____
###Markdown
Tuning hyperparameters using Optuna1. Go to this [link](https://github.com/DLR-RM/rl-baselines3-zoo/blob/master/utils/hyperparams_opt.py); there you will find all possible hyperparameters to tune for all the models.2. For your model, grab those hyperparameters which you want to optimize and then return a dictionary of hyperparameters.3. Optuna can also report hyperparameter importances, which lets you point out those hyperparameters that matter most for tuning.4. By default Optuna uses the [TPESampler](https://www.youtube.com/watch?v=tdwgR1AqQ8Y) for sampling hyperparameters from the search space.
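A minimal, self-contained sketch of this Optuna workflow with a toy objective (the names here are illustrative and not taken from the notebook):
import optuna
def toy_objective(trial: optuna.Trial) -> float:
    # sample hyperparameters from the search space
    x = trial.suggest_float("x", -10.0, 10.0)
    lr = trial.suggest_loguniform("lr", 1e-5, 1e-1)
    # return the value that the study should maximize
    return -(x - 2.0) ** 2 - lr
study = optuna.create_study(direction="maximize")
study.optimize(toy_objective, n_trials=20)
print(study.best_params, study.best_value)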
###Code
def sample_ddpg_params(trial:optuna.Trial):
# Size of the replay buffer
buffer_size = trial.suggest_categorical("buffer_size", [int(1e4), int(1e5), int(1e6)])
learning_rate = trial.suggest_loguniform("learning_rate", 1e-5, 1)
batch_size = trial.suggest_categorical("batch_size", [32, 64, 128, 256, 512])
return {"buffer_size": buffer_size,
"learning_rate":learning_rate,
"batch_size":batch_size}
###Output
_____no_output_____
###Markdown
*OPTIONAL CODE FOR SAMPLING HYPERPARAMETERS*To tune the larger set of hyperparameters, replace the current call in function *objective* with `hyperparameters = sample_ddpg_params_all(trial)`
###Code
def sample_ddpg_params_all(trial:optuna.Trial,
# fixed values from previous study
learning_rate=0.0103,
batch_size=128,
buffer_size=int(1e6)):
gamma = trial.suggest_categorical("gamma", [0.94, 0.96, 0.98])
# Polyak coeff
tau = trial.suggest_categorical("tau", [0.08, 0.1, 0.12])
train_freq = trial.suggest_categorical("train_freq", [512,768,1024])
gradient_steps = train_freq
noise_type = trial.suggest_categorical("noise_type", ["ornstein-uhlenbeck", "normal", None])
noise_std = trial.suggest_categorical("noise_std", [.1,.2,.3] )
# NOTE: Add "verybig" to net_arch when tuning HER (see TD3)
net_arch = trial.suggest_categorical("net_arch", ["small", "big"])
# activation_fn = trial.suggest_categorical('activation_fn', [nn.Tanh, nn.ReLU, nn.ELU, nn.LeakyReLU])
net_arch = {
"small": [64, 64],
"medium": [256, 256],
"big": [512, 512],
}[net_arch]
hyperparams = {
"batch_size": batch_size,
"buffer_size": buffer_size,
"gamma": gamma,
"gradient_steps": gradient_steps,
"learning_rate": learning_rate,
"tau": tau,
"train_freq": train_freq,
#"noise_std": noise_std,
#"noise_type": noise_type,
"policy_kwargs": dict(net_arch=net_arch)
}
return hyperparams
###Output
_____no_output_____
###Markdown
Callbacks1. The callback will stop the study early once the improvement in the objective falls below a certain threshold2. It can only trigger after a certain minimum number of trials (trial_number) has been reached, not before that3. It waits until the threshold condition has been met more than `patience` times before stopping
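For context, Optuna invokes every registered callback after each completed trial, passing the study and the finished (frozen) trial, so a callback only needs to be a callable with that signature; a minimal sketch (not from the notebook):
import optuna
def print_progress(study: optuna.study.Study, frozen_trial: optuna.trial.FrozenTrial) -> None:
    # called by study.optimize(...) after each trial completes
    print(f"trial {frozen_trial.number} finished; best value so far: {study.best_value}")
# usage: study.optimize(objective, n_trials=10, callbacks=[print_progress])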
###Code
class LoggingCallback:
def __init__(self,threshold,trial_number,patience):
'''
        threshold: float   minimum improvement in the objective required to reset patience
        trial_number: int  minimum number of completed trials before the study may be stopped
        patience: int      number of below-threshold improvements tolerated before stopping
'''
self.threshold = threshold
self.trial_number = trial_number
self.patience = patience
print(f'Callback threshold {self.threshold}, \
trial_number {self.trial_number}, \
patience {self.patience}')
self.cb_list = [] #Trials list for which threshold is reached
def __call__(self,study:optuna.study, frozen_trial:optuna.Trial):
#Setting the best value in the current trial
study.set_user_attr("previous_best_value", study.best_value)
        #Checking if the minimum number of trials has passed
if frozen_trial.number >self.trial_number:
previous_best_value = study.user_attrs.get("previous_best_value",None)
#Checking if the previous and current objective values have the same sign
if previous_best_value * study.best_value >=0:
#Checking for the threshold condition
if abs(previous_best_value-study.best_value) < self.threshold:
self.cb_list.append(frozen_trial.number)
#If threshold is achieved for the patience amount of time
if len(self.cb_list)>self.patience:
print('The study stops now...')
print('With number',frozen_trial.number ,'and value ',frozen_trial.value)
print('The previous and current best values are {} and {} respectively'
.format(previous_best_value, study.best_value))
study.stop()
from IPython.display import clear_output
import sys
os.makedirs("models",exist_ok=True)
def objective(trial:optuna.Trial):
    #Trial will suggest a set of hyperparameters from the specified range
# Optional to optimize larger set of parameters
# hyperparameters = sample_ddpg_params_all(trial)
# Optimize buffer size, batch size, learning rate
hyperparameters = sample_ddpg_params(trial)
#print(f'Hyperparameters from objective: {hyperparameters.keys()}')
policy_kwargs = None # default
if 'policy_kwargs' in hyperparameters.keys():
policy_kwargs = hyperparameters['policy_kwargs']
del hyperparameters['policy_kwargs']
#print(f'Policy keyword arguments {policy_kwargs}')
model_ddpg = agent.get_model("ddpg",
policy_kwargs = policy_kwargs,
model_kwargs = hyperparameters )
    #You can increase total_timesteps for a better comparison
trained_ddpg = agent.train_model(model=model_ddpg,
tb_log_name="ddpg",
total_timesteps=total_timesteps)
trained_ddpg.save('models/ddpg_{}.pth'.format(trial.number))
clear_output(wait=True)
    #For the given hyperparameters, determine the account value in the trading period
df_account_value, df_actions = DRLAgent.DRL_prediction(
model=trained_ddpg,
environment = e_trade_gym)
# Calculate trade performance metric
# Currently ratio of average win and loss market values
tpm = calc_trade_perf_metric(df_actions,trade,tp_metric)
return tpm
#Create a study object and specify the direction as 'maximize'
#since we want to maximize the trade performance metric (average win / average loss)
#The pruner stops unpromising trials early
#Use a pruner, otherwise you may run into errors related to model divergence
#You can also use the multivariate TPE sampler:
#sampler = optuna.samplers.TPESampler(multivariate=True, seed=42)
sampler = optuna.samplers.TPESampler()
study = optuna.create_study(study_name="ddpg_study",direction='maximize',
sampler = sampler, pruner=optuna.pruners.HyperbandPruner())
logging_callback = LoggingCallback(threshold=lc_threshold,
patience=lc_patience,
trial_number=lc_trial_number)
#You can increase n_trials for a more thorough scan of the search space
study.optimize(objective, n_trials=n_trials,catch=(ValueError,),callbacks=[logging_callback])
joblib.dump(study, "final_ddpg_study__.pkl")
#Get the best hyperparameters
print('Hyperparameters after tuning',study.best_params)
print('Hyperparameters before tuning',config.DDPG_PARAMS)
study.best_trial
from stable_baselines3 import DDPG
tuned_model_ddpg = DDPG.load('models/ddpg_{}.pth'.format(study.best_trial.number),env=env_train)
#Trading period account value with tuned model
df_account_value_tuned, df_actions_tuned = DRLAgent.DRL_prediction(
model=tuned_model_ddpg,
environment = e_trade_gym)
def add_trade_perf_metric(df_actions,
perf_stats_all,
trade,
tp_metric):
tpm = calc_trade_perf_metric(df_actions,trade,tp_metric)
trp_metric = {'Value':tpm}
df2 = pd.DataFrame(trp_metric,index=['Trade_Perf'])
perf_stats_all = perf_stats_all.append(df2)
return perf_stats_all
now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M')
df_actions_tuned.to_csv("./"+config.RESULTS_DIR+"/tuned_actions_" +now+ '.csv')
#Backtesting with our tuned model
print("==============Get Backtest Results===========")
print("==============Pruned Model===========")
now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M')
perf_stats_all_tuned = backtest_stats(account_value=df_account_value_tuned)
perf_stats_all_tuned = pd.DataFrame(perf_stats_all_tuned)
perf_stats_all_tuned.columns = ['Value']
# add trade performance metric
perf_stats_all_tuned = \
add_trade_perf_metric(df_actions_tuned,
perf_stats_all_tuned,
trade,
tp_metric)
perf_stats_all_tuned.to_csv("./"+config.RESULTS_DIR+"/perf_stats_all_tuned_"+now+'.csv')
#Now train with untuned (default) hyperparameters
#Default config.DDPG_PARAMS
non_tuned_model_ddpg = agent.get_model("ddpg",model_kwargs = config.DDPG_PARAMS )
trained_ddpg = agent.train_model(model=non_tuned_model_ddpg,
tb_log_name='ddpg',
total_timesteps=total_timesteps)
df_account_value, df_actions = DRLAgent.DRL_prediction(
model=trained_ddpg,
environment = e_trade_gym)
#Backtesting with untuned hyperparameters
print("==============Get Backtest Results===========")
print("============Default Hyperparameters==========")
now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M')
perf_stats_all = backtest_stats(account_value=df_account_value)
perf_stats_all = pd.DataFrame(perf_stats_all)
perf_stats_all.columns = ['Value']
# add trade performance metric
perf_stats_all = add_trade_perf_metric(df_actions,
perf_stats_all,
trade,
tp_metric)
perf_stats_all.to_csv("./"+config.RESULTS_DIR+"/perf_stats_all_"+now+'.csv')
#You can certainly afford a larger number of trials for further optimization
from optuna.visualization import plot_optimization_history
fig = plot_optimization_history(study)
#"./"+config.RESULTS_DIR+
fig.write_image("./"+config.RESULTS_DIR+"/opt_hist.png")
fig.show()
from optuna.visualization import plot_contour
from optuna.visualization import plot_edf
from optuna.visualization import plot_intermediate_values
from optuna.visualization import plot_optimization_history
from optuna.visualization import plot_parallel_coordinate
from optuna.visualization import plot_param_importances
from optuna.visualization import plot_slice
#Hyperparameter importances
try:
fig = plot_param_importances(study)
fig.write_image("./"+config.RESULTS_DIR+"/params_importances.png")
fig.show()
except:
print('Cannot calculate hyperparameter importances: no variation')
fig = plot_edf(study)
fig.write_image("./"+config.RESULTS_DIR+"/emp_dist_func.png")
fig.show()
files.download('/content/final_ddpg_study__.pkl')
###Output
_____no_output_____
customer_churn_2.ipynb | ###Markdown
We have three numerical features and as mentioned above we will visualize these features with respect to our target variable to get a better understanding of how one affects the other.Due to different ranges of values in different columns, it is possible that some values will dominate the others. To avoid that I used Min-Max scaler to scale the data.
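As a quick illustration (not from the original notebook), Min-Max scaling maps each value to x' = (x - x_min) / (x_max - x_min), so every scaled column lies in the [0, 1] range:
import numpy as np
from sklearn.preprocessing import MinMaxScaler
toy = np.array([[1.0], [5.0], [9.0]])             # hypothetical column values
print(MinMaxScaler().fit_transform(toy).ravel())  # [0.  0.5 1. ]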
###Code
print('Churn column values', '\n',(round(df.Churn.value_counts(normalize=True),3)*100))
#How do we select features? Features are dropped when they do not contribute significantly to the model.
X, y = df.drop('Churn',axis=1), df[['Churn']]
## to find significant features using LassoCV (all X_scaled)
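# One possible sketch of such LassoCV-based selection (X_scaled here is hypothetical);
# kept commented out because X is only encoded and scaled further below:
# from sklearn.linear_model import LassoCV
# from sklearn.feature_selection import SelectFromModel
# selector = SelectFromModel(LassoCV(cv=5)).fit(X_scaled, y.values.ravel())
# significant_features = X_scaled.columns[selector.get_support()]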
#Scaling Numerical Values between 0 and 1.
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X[['tenure','TotalCharges','MonthlyCharges']] = scaler.fit_transform(X[['tenure','TotalCharges','MonthlyCharges']])
df_num=X[['tenure','TotalCharges','MonthlyCharges']]
#Correlation matrix for the numerical columns: TotalCharges and tenure are highly correlated.
#Highly correlated features add little independent information, so one of them should be dropped.
corr = df_num.corr()
plt.figure(figsize=(4,1))
sns.heatmap(corr,
xticklabels=corr.columns.values,
yticklabels=corr.columns.values,annot=True)
corr
X.drop(['tenure', 'customerID'], axis=1, inplace=True)
X.shape
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
#categorical columns
cat_cols = X.nunique()[X.nunique() < 6].keys().tolist()
cat_cols = [x for x in cat_cols if x not in y]
#Binary columns with 2 values
bin_cols= X.nunique()[X.nunique() == 2].keys().tolist()
#Columns more than 2 values
multi_cols = [i for i in cat_cols if i not in bin_cols]
num_cols= [x for x in X.columns if x not in cat_cols]
#Label encoding Binary columns
le = LabelEncoder()
for i in bin_cols :
X[i] = le.fit_transform(X[i])
#One-hot encode (create dummy columns for) features with more than two values
X = pd.get_dummies(data = X,columns = multi_cols )
df_dummies=X
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import precision_score, recall_score, confusion_matrix, roc_curve, precision_recall_curve, accuracy_score, roc_auc_score
X_train,X_test,y_train,y_test = train_test_split(df_dummies,y,test_size=0.2)  # train_test_split returns X_train, X_test, y_train, y_test in this order
###Output
_____no_output_____
###Markdown
- I've decided to compare three support vector machines built with 3 different processes. Simple SVM algorithm, before tuning
###Code
X_train_s, X_test_s, y_train_s, y_test_s = train_test_split(df_dummies, y, test_size=0.2, random_state=101)
from sklearn import svm
clf=svm.SVC()
svm_fit=clf.fit(X_train_s,y_train_s)
prediction_svm = clf.predict(X_test_s)
# Print the prediction accuracy
print (accuracy_score(y_test_s, prediction_svm))
roc_auc_score(y_test_s, prediction_svm)
fpr, tpr, thresholds = roc_curve(y_test_s,prediction_svm)
#second fpr_1,tpr_1,thresholds_2
plt.plot([0, 1], [0, 1], '--')
plt.plot(fpr, tpr, label='ROC curve: SVM without tuning (area = %0.2f)'%roc_auc_score(y_test_s, prediction_svm))
#plt.plot(fpr_1,tpr_1)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.xlim([0.0,1.05])
plt.ylim([0.0,1.05])
plt.title('ROC Curve')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
Tuning SVM Parameters GridSearchCV
###Code
from sklearn.model_selection import GridSearchCV
param_grid = {'C': [0.1, 1, 10, 100, 1000],
'gamma': [1, 0.1, 0.01, 0.001, 0.0001],
'kernel': ['rbf']}
grid = GridSearchCV(svm.SVC(), param_grid, cv=5, refit = True, verbose = 3)
grid.fit(X_train_s, y_train_s)
grid_svm = grid.predict(X_test_s)
print(roc_auc_score(y_test_s, grid_svm))
print(accuracy_score(y_test_s, grid_svm))
fpr, tpr, thresholds = roc_curve(y_test_s,grid_svm)
#second fpr_1,tpr_1,thresholds_2
plt.plot([0, 1], [0, 1], '--')
plt.plot(fpr, tpr, label='ROC curve: SVM with GridSearchCV tuning (area = %0.2f)'%roc_auc_score(y_test_s, grid_svm))
#plt.plot(fpr_1,tpr_1)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.xlim([0.0,1.05])
plt.ylim([0.0,1.05])
plt.title('ROC Curve')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
SMOTE
###Code
from imblearn.over_sampling import SMOTE
smote=SMOTE(random_state=42,ratio=1.0)
X_train_smote,y_train_smote=smote.fit_sample(X_train_s,y_train_s)
from collections import Counter
print('Before SMOTE:', Counter(y_train_s))
print('After SMOTE:', Counter(y_train_smote))
X_train_2, X_test_2, y_train_2, y_test_2 = train_test_split(X_train_smote, y_train_smote, test_size=0.2, random_state=20)
grid.fit(X_train_smote, y_train_smote)
grid_smote_svm = grid.predict(X_test_s)
print(grid_smote_svm)
accuracy_score(y_test_s,grid_smote_svm)
roc_auc_score(y_test_s, grid_smote_svm)
###Output
_____no_output_____
###Markdown
Random Under Sample
###Code
from imblearn.under_sampling import RandomUnderSampler #undersample the majority class so it matches the minority class size
rus = RandomUnderSampler(random_state=42)
X_train_rus,y_train_rus=rus.fit_resample(X_train_s,y_train_s)
X_train_3, X_test_3, y_train_3, y_test_3 = train_test_split(X_train_rus, y_train_rus, test_size=0.2, random_state=20)
grid.fit(X_train_rus ,y_train_rus)
grid_rus_svm = grid.predict(X_test_s)
accuracy_score(y_test_s, grid_rus_svm)
roc_auc_score(y_test_s, grid_rus_svm)
fpr, tpr, thresholds = roc_curve(y_test_s,prediction_svm)
fpr_1, tpr_1, thresholds_1 = roc_curve(y_test_s,grid_svm)
fpr_2, tpr_2, thresholds_2 = roc_curve(y_test_s,grid_smote_svm)
#second fpr_1,tpr_1,thresholds_2
plt.plot([0, 1], [0, 1], '--')
plt.plot(fpr, tpr, label='svm without tuning (area = %0.2f)'%roc_auc_score(y_test_s, prediction_svm))
plt.plot(fpr_1, tpr_1, label=' svm tuned (area = %0.2f)'%roc_auc_score(y_test_s, grid_svm))
plt.plot(fpr_2, tpr_2, label='SVM tuned balanced (area = %0.2f)'%roc_auc_score(y_test_s, grid_smote_svm))
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.xlim([0.0,1.05])
plt.ylim([0.0,1.05])
plt.title('ROC Curve')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____ |
Two Pointer/1018/923. 3Sum With Multiplicity.ipynb | ###Markdown
Problem: Given an integer array A and an integer target, return the number of index tuples i, j, k with i < j < k such that A[i] + A[j] + A[k] == target. Since the result can be very large, return it modulo 10^9 + 7. Example 1: Input: A = [1,1,2,2,3,3,4,4,5,5], target = 8 Output: 20 Explanation, enumerating by value (A[i], A[j], A[k]): (1, 2, 5) occurs 8 times; (1, 3, 4) occurs 8 times; (2, 2, 4) occurs 2 times; (2, 3, 3) occurs 2 times. Example 2: Input: A = [1,1,2,2,2,2], target = 5 Output: 12 Explanation: A[i] = 1, A[j] = A[k] = 2 occurs 12 times: choose one 1 from [1,1] in 2 ways, and choose two 2s from [2,2,2,2] in 6 ways. Constraints: 1. 3 <= A.length <= 3000 2. 0 <= A[i] <= 100 3. 0 <= target <= 300
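Because values repeat, the counting reduces to binomial coefficients over the value counts; a quick check of Example 2 (not part of the original notebook):
from math import comb
print(comb(2, 1) * comb(4, 2))  # one 1 out of two, two 2s out of four -> 2 * 6 = 12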
###Code
from collections import Counter
class Solution:
def threeSumMulti(self, A, target: int) -> int:
max_val = pow(10, 9) + 7
a_count = Counter(A)
count = 0
for i in a_count.keys():
a_count[i] -= 1
for j in a_count.keys():
if a_count[j] > 0:
a_count[j] -= 1
need_val = target - i - j
print(need_val, a_count[need_val])
a_count[j] += 1
a_count[i] += 1
from collections import Counter
class Solution:
def threeSumMulti(self, A, target: int) -> int:
max_val = pow(10, 9) + 7
a_count = Counter(A)
count = 0
N = len(A)
l_idx, r_idx = 0, 1
while l_idx < N:
l_val, m_val = A[l_idx], A[r_idx]
c_count = a_count.copy()
c_count[l_val] -= 1
            c_count[m_val] -= 1  # m_val is the second picked value (r_val was never defined)
need_val = target - A[l_idx] - A[r_idx]
if need_val not in c_count or c_count[need_val] <= 0:
from collections import Counter
class Solution:
def threeSumMulti(self, A, target: int) -> int:
max_val = pow(10, 9) + 7
a_count = Counter(A)
keys = sorted(a_count)
count = 0
N = len(A)
print(a_count, keys)
for i, k in enumerate(keys):
n = a_count[k]
if k > target:
break
elif 3 * k == target:
count += n * (n - 1) * (n - 2) // 6 if n >= 3 else 0
elif target - 2 * k in keys:
count += a_count[target - 2 * k] * n * (n - 1) // 2 if n >= 2 else 0
for j, k_j in enumerate(keys[i+1:], i + 1):
num = target - k - k_j
if num < 0:
break
elif num in keys[j + 1:]:
count += n * a_count[k_j] * a_count[num]
return count % max_val
solution = Solution()
solution.threeSumMulti(A = [1,1,2,2,3,3,4,4,5,5], target = 8)
a = [1, 2, 3, 4, 5, 6]
print(a[1:])
###Output
[2, 3, 4, 5, 6]
|
notebooks/CIFAR100_MCBB.ipynb | ###Markdown
Download data
###Code
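# Assumed setup: the imports and settings below are used throughout this notebook but never defined;
# the LOADER_KWARGS values and the Drive path are guesses, adjust them as needed.
import os
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets, transforms
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
LOADER_KWARGS = {"num_workers": 2, "pin_memory": True} if torch.cuda.is_available() else {}
path = "/content/drive/MyDrive/CIFAR"  # assumed folder for the saved data loaders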
training_data = datasets.CIFAR10(root='data', train=True, download=True, transform=transforms.ToTensor())
test_data = datasets.CIFAR10(root='data', train=False, download=True, transform=transforms.ToTensor())
train_set, val_set = torch.utils.data.random_split(training_data,[40000,10000])
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True, drop_last=True, **LOADER_KWARGS)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=128, shuffle=True, drop_last=True, ** LOADER_KWARGS)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=128, shuffle=True, drop_last=True, **LOADER_KWARGS)
training_loader = torch.utils.data.DataLoader(training_data, batch_size=128, shuffle=True, drop_last=True, **LOADER_KWARGS)
with open("/content/drive/MyDrive/CIFAR/training_loader.pt", "wb") as f:
torch.save(training_loader, f)
with open("/content/drive/MyDrive/CIFAR/test_loader.pt", "wb") as f:
torch.save(test_loader, f)
###Output
_____no_output_____
###Markdown
OOD dataset: SVHN
###Code
svhn_dataset = datasets.SVHN(root='..data', split='test', transform=transforms.ToTensor(), download=True)
svhn_loader = torch.utils.data.DataLoader(svhn_dataset, batch_size=128, drop_last=True, **LOADER_KWARGS)
###Output
Using downloaded and verified file: ..data/test_32x32.mat
###Markdown
Loading data in drive
###Code
with open(os.path.join(path, "training_loader.pt"), "rb") as f:
training_loader = torch.load(f)
with open(os.path.join(path, "test_loader.pt"), "rb") as f:
test_loader = torch.load(f)
print(training_loader.dataset)
print(test_loader.dataset)
###Output
Dataset CIFAR100
Number of datapoints: 50000
Root location: .data
Split: Train
StandardTransform
Transform: ToTensor()
Dataset CIFAR100
Number of datapoints: 10000
Root location: .data
Split: Test
StandardTransform
Transform: ToTensor()
###Markdown
Network
###Code
eps = 1e-20
class Gaussian:
def __init__(self, mu, rho):
self.mu = mu
self.rho = rho
self.normal = torch.distributions.Normal(0,1)
@property
def sigma(self):
return torch.log1p(torch.exp(self.rho))
def sample(self):
epsilon = self.normal.sample(self.rho.size()).to(DEVICE)
return self.mu + self.sigma * epsilon
def log_prob(self, input):
return (-math.log(math.sqrt(2 * math.pi)) - torch.log(self.sigma+eps) - ((input - self.mu) ** 2) / (2 * self.sigma ** 2)).sum()
class GaussianPrior:
def __init__(self,mu,sigma):
self.mu = mu
self.sigma = sigma
def log_prob(self,input):
return (-math.log(math.sqrt(2 * math.pi)) - torch.log(self.sigma) - ((input - self.mu) ** 2) / (2 * self.sigma ** 2)).sum()
class BayesianLinear(nn.Module):
def __init__(self, n_input, n_output, sigma1):
super().__init__()
self.n_input = n_input
self.n_output = n_output
self.w_mu = nn.Parameter(torch.Tensor(n_output,n_input).normal_(0,math.sqrt(2/n_input)))
self.w_rho = nn.Parameter(torch.Tensor(n_output, n_input).uniform_(-2.253,-2.252))
self.w = Gaussian(self.w_mu, self.w_rho)
self.b_mu = nn.Parameter(torch.Tensor(n_output).normal_(0,math.sqrt(2/n_input)))
self.b_rho = nn.Parameter(torch.Tensor(n_output).uniform_(-2.253,-2.252))
self.b = Gaussian(self.b_mu, self.b_rho)
#Prior: Gaussian
self.w_prior = GaussianPrior(0,sigma1)
self.b_prior = GaussianPrior(0,sigma1)
self.log_prior = 0
self.log_variational_posterior= 0
self.sigma_mean = 0
self.sigma_std = 0
def forward(self, input, sample=False):
if self.training or sample:
w = self.w.sample()
b = self.b.sample()
else:
w = self.w_mu
b = self.b_mu
self.log_prior = self.w_prior.log_prob(w) + self.b_prior.log_prob(b)
self.log_variational_posterior = self.w.log_prob(w) + self.b.log_prob(b)
self.sigma_mean = self.w.sigma.mean()
self.sigma_std = self.w.sigma.std()
return F.linear(input, w, b)
class BayesianConv2D(nn.Module):
def __init__(self, in_channels, out_channels, sigma1, kernel_size=3, stride=1, padding=1):
super().__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.kernel_size = kernel_size
self.stride = stride
self.padding = padding
self.w_mu = nn.Parameter(torch.Tensor(out_channels,in_channels, kernel_size, kernel_size).normal_(0,math.sqrt(2/(out_channels*in_channels*kernel_size*kernel_size))))
self.w_rho = nn.Parameter(torch.Tensor(out_channels, in_channels, kernel_size, kernel_size).uniform_(-2.253,-2.252))
self.w = Gaussian(self.w_mu, self.w_rho)
# check whether bias is needed
# prior: Gaussian
self.w_prior = GaussianPrior(0,sigma1)
self.log_prior = 0
self.log_variational_posterior = 0
def forward(self, input, sample=False):
if self.training or sample:
w = self.w.sample()
else:
w = self.w_mu
self.log_prior = self.w_prior.log_prob(w)
self.log_variational_posterior = self.w.log_prob(w)
return F.conv2d(input, w, bias=None, stride=self.stride, padding=self.padding)
def BayesianConv3x3(in_channels, out_channels, sigma1, stride=1):
return BayesianConv2D(in_channels, out_channels, sigma1, kernel_size=3,stride=stride, padding=1)
class TLU(nn.Module):
def __init__(self, num_features):
super().__init__()
self.num_features = num_features
self.tau = nn.parameter.Parameter(torch.Tensor(1,num_features,1,1), requires_grad=True)
self.reset_parameters()
def reset_parameters(self):
nn.init.kaiming_normal_(self.tau)
#nn.init.zeros_(self.tau)
def forward(self, x):
return torch.max(x, self.tau)
class FRN(nn.Module):
def __init__(self, num_features, eps=1e-6, is_eps_learnable=False):
super().__init__()
self.num_features = num_features
self.init_eps = eps
self.is_eps_learnable = is_eps_learnable
self.weight = nn.parameter.Parameter(torch.Tensor(1, num_features, 1, 1), requires_grad=True)
self.bias = nn.parameter.Parameter(torch.Tensor(1,num_features, 1, 1), requires_grad=True)
if is_eps_learnable:
self.eps = nn.Parameter(torch.Tensor(1))
else:
self.eps = torch.tensor(eps)
self.reset_parameters()
def reset_parameters(self):
nn.init.kaiming_normal_(self.weight)
nn.init.kaiming_normal_(self.bias)
if self.is_eps_learnable:
nn.init.constant_(self.eps, self.init_eps)
def forward(self,x):
nu2 = x.pow(2).mean(dim=[2,3], keepdim=True)
x = x * torch.rsqrt(nu2 + self.eps.abs())
x = self.weight * x + self.bias
return x
class ResidualBlock(nn.Module):
def __init__(self, in_channels, out_channels, sigma1, stride=1, downsample=None):
super().__init__()
self.conv1 = BayesianConv3x3(in_channels, out_channels, sigma1, stride)
self.frn1 = nn.BatchNorm2d(out_channels)
self.tlu1 = nn.ReLU(inplace=True)
self.conv2 = BayesianConv3x3(out_channels, out_channels, sigma1)
self.frn2 = nn.BatchNorm2d(out_channels)
self.tlu2 = nn.ReLU(inplace=True)
self.downsample = downsample
self.log_prior = 0
self.log_variational_posterior = 0
self.sigma_mean = 0
self.sigma_std = 0
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.frn1(out)
out = self.tlu1(out)
out = self.conv2(out)
out = self.frn2(out)
if self.downsample:
residual = self.downsample(x)
out += residual
out = self.tlu2(out)
self.log_prior = self.conv1.log_prior + self.conv2.log_prior
self.log_variational_posterior = self.conv1.log_variational_posterior + self.conv2.log_variational_posterior
para = torch.cat((self.conv1.w.sigma.flatten(), self.conv2.w.sigma.flatten()))
self.sigma_mean = para.mean()
self.sigma_std = para.std()
return out
class BayesianResNet14(nn.Module):
def __init__(self, block, sigma1, num_class=10):
super().__init__()
self.num_class = num_class
self.in_channels = 16
self.conv = BayesianConv3x3(3,16, sigma1)
self.frn = nn.BatchNorm2d(16)
self.tlu = nn.ReLU(inplace=True)
self.block1 = ResidualBlock(16,16,sigma1)
self.block2 = ResidualBlock(16,16,sigma1)
downsample1 = nn.Sequential(BayesianConv3x3(16,32,sigma1,2), nn.BatchNorm2d(32))
self.block3 = ResidualBlock(16,32,sigma1,2,downsample1)
self.block4 = ResidualBlock(32,32,sigma1)
downsample2 = nn.Sequential(BayesianConv3x3(32,64,sigma1,2), nn.BatchNorm2d(64))
self.block5 = ResidualBlock(32,64,sigma1,2,downsample2)
self.block6 = ResidualBlock(64,64,sigma1)
self.avg_pool = nn.AvgPool2d(8)
self.fc = BayesianLinear(64, num_class, sigma1)
def forward(self, x, sample=False):
out = self.conv(x)
out = self.frn(out)
out = self.tlu(out)
out = self.block1(out)
out = self.block2(out)
out = self.block3(out)
out = self.block4(out)
out = self.block5(out)
out = self.block6(out)
out = self.avg_pool(out)
out = out.view(out.size(0),-1)
out = F.softmax(self.fc(out, sample), dim=1)
return out
def log_prior(self):
return self.conv.log_prior + self.block1.log_prior + self.block2.log_prior + self.block3.log_prior + self.block4.log_prior + self.block5.log_prior + self.block6.log_prior + self.fc.log_prior
def log_variational_posterior(self):
return self.conv.log_variational_posterior + self.block1.log_variational_posterior + self.block2.log_variational_posterior + self.block3.log_variational_posterior + self.block4.log_variational_posterior + self.block5.log_variational_posterior + self.block6.log_variational_posterior + self.fc.log_variational_posterior
def free_energy(self, input, target, batch_size, num_batches, n_samples, T):
outputs = torch.zeros(batch_size, self.num_class).to(DEVICE)
log_prior = torch.zeros(1).to(DEVICE)
log_variational_posterior = torch.zeros(1).to(DEVICE)
negative_log_likelihood = torch.zeros(1).to(DEVICE)
for i in range(n_samples):
output = self(input, sample=True)
outputs += output/n_samples
log_prior += self.log_prior()/n_samples
log_variational_posterior += self.log_variational_posterior()/n_samples
negative_log_likelihood += F.nll_loss(torch.log(output+eps), target, size_average=False)/n_samples
# new target function, not absorb T into prior
loss = (log_variational_posterior - log_prior / T) + negative_log_likelihood / T * num_batches
corrects = outputs.argmax(dim=1).eq(target).sum().item()
return loss, log_prior, log_variational_posterior, negative_log_likelihood, corrects
def write_weight_histograms(epoch):
writer.add_histogram('histogram/w1_mu', net.l1.w_mu, epoch)
writer.add_histogram('histogram/w1_rho', net.l1.w_rho, epoch)
writer.add_histogram('histogram/w2_mu', net.l2.w_mu, epoch)
writer.add_histogram('histogram/w2_rho', net.l2.w_rho, epoch)
writer.add_histogram('histogram/w3_mu', net.l3.w_mu, epoch)
writer.add_histogram('histogram/w3_rho', net.l3.w_rho, epoch)
def write_loss_scalars(epoch, loss, accuracy, log_prior, log_variational_posterior, negative_log_likelihood):
writer.add_scalar('logs/loss', loss, epoch)
writer.add_scalar('logs/accuracy', accuracy, epoch)
writer.add_scalar('logs/complexity', log_variational_posterior-log_prior, epoch)
writer.add_scalar('logs/negative_log_likelihood', negative_log_likelihood, epoch)
def write_test_scalar(epoch, loss, accuracy):
writer.add_scalar('logs/test_loss', loss,epoch)
writer.add_scalar('logs/test_accuracy', accuracy, epoch)
def write_sigma(epoch):
writer.add_scalar('sigma/block1', net.block1.sigma_mean,epoch)
writer.add_scalar('sigma/block2', net.block2.sigma_mean,epoch)
writer.add_scalar('sigma/block3', net.block3.sigma_mean,epoch)
writer.add_scalar('sigma/block4', net.block4.sigma_mean,epoch)
writer.add_scalar('sigma/block5', net.block5.sigma_mean,epoch)
writer.add_scalar('sigma/block6', net.block6.sigma_mean,epoch)
writer.add_scalar('sigma/fc', net.fc.sigma_mean,epoch)
writer.add_scalar('sigmastd/block1', net.block1.sigma_std,epoch)
writer.add_scalar('sigmastd/block2', net.block2.sigma_std,epoch)
writer.add_scalar('sigmastd/block3', net.block3.sigma_std,epoch)
writer.add_scalar('sigmastd/block4', net.block4.sigma_std,epoch)
writer.add_scalar('sigmastd/block5', net.block5.sigma_std,epoch)
writer.add_scalar('sigmastd/block6', net.block6.sigma_std,epoch)
writer.add_scalar('sigmastd/fc', net.fc.sigma_std,epoch)
###Output
_____no_output_____
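###Markdown
A quick shape check of the Bayesian ResNet defined above can catch wiring mistakes before training. This is a minimal sketch (not part of the original runs), assuming `BayesianResNet14`, `ResidualBlock`, and `DEVICE` from the cells above; the random batch stands in for a CIFAR mini-batch.
###Code
# Hypothetical sanity check: one stochastic forward pass through the Bayesian ResNet
sanity_net = BayesianResNet14(ResidualBlock, torch.tensor(1.0), num_class=100).to(DEVICE)
dummy = torch.randn(4, 3, 32, 32).to(DEVICE)  # fake CIFAR-sized batch
with torch.no_grad():
    probs = sanity_net(dummy, sample=True)    # draws one weight sample from the variational posterior
print(probs.shape)        # expected: torch.Size([4, 100])
print(probs.sum(dim=1))   # each row sums to ~1 because of the softmax head
###Output
_____no_output_____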
###Markdown
Train and test
###Code
def train(net, optimizer, epoch, trainLoader, batchSize, nSamples ,T):
net.train()
num_batches_train = len(trainLoader)
# if epoch == 0:
# write_weight_histograms(epoch)
for batch_idx, (data, target) in enumerate(tqdm(trainLoader)):
data, target = data.to(DEVICE), target.to(DEVICE)
net.zero_grad()
loss, log_prior, log_variational_posterior, negative_log_likelihood, corrects = net.free_energy(data, target, batchSize, num_batches_train, nSamples,T)
loss.backward()
optimizer.step()
accuracy = corrects / batchSize
# write_loss_scalars(epoch, loss, accuracy, log_prior, log_variational_posterior, negative_log_likelihood)
# write_weight_histograms(epoch)
# write_sigma(epoch)
return accuracy, loss
def test_duringTrain(net, epoch, testLoader, batchSize, nSamples, T):
net.eval()
accuracy = 0
n_corrects = 0
Loss = 0
num_batches_test = len(testLoader)
n_test = batchSize * num_batches_test
outputs = torch.zeros(n_test, 10).to(DEVICE)
correct = torch.zeros(n_test).to(DEVICE)
with torch.no_grad():
for i, (data, target) in enumerate(testLoader):
data, target = data.to(DEVICE), target.to(DEVICE)
for j in range(nSamples):
output = net(data, sample=True)
outputs[i*batchSize:batchSize*(i+1), :] += output/nSamples
Loss += F.nll_loss(torch.log(output), target, size_average=False)/nSamples
# loss is log likelihood
correct[i*batchSize:batchSize*(i+1)] = (outputs[i*batchSize:batchSize*(i+1), :]).argmax(1).eq(target)
accuracy = correct.mean()
#write_test_scalar(epoch, Loss, accuracy)
return accuracy, Loss
def test(net, testLoader, batchSize, nSamples,T, num_class=10):
# update ECE
net.eval()
accuracy = 0
n_corrects = 0
Loss = 0
num_batches_test = len(testLoader)
n_test = batchSize * num_batches_test
outputs = torch.zeros(n_test, num_class).to(DEVICE)
correct = torch.zeros(n_test).to(DEVICE)
target_all = torch.zeros(n_test).to(DEVICE)
M = 10
boundary = ((torch.tensor(range(0,M))+1)/10).view(1,-1)
boundary = boundary.repeat(batchSize, 1).to(DEVICE)
acc_Bm_sum = torch.zeros(M).to(DEVICE)
conf_Bm_sum = torch.zeros(M).to(DEVICE)
Bm = torch.zeros(M).to(DEVICE)
with torch.no_grad():
for i, (data, target) in enumerate(testLoader):
data, target = data.to(DEVICE), target.to(DEVICE)
target_all[i*batchSize:batchSize*(i+1)] = target
for j in range(nSamples):
output = net(data, sample=True)
outputs[i*batchSize:batchSize*(i+1), :] += output/nSamples
Loss += F.nll_loss(torch.log(output), target, size_average=False)/nSamples
# loss is log likelihood
correct[i*batchSize:batchSize*(i+1)] = (outputs[i*batchSize:batchSize*(i+1), :]).argmax(1).eq(target)
otemp =outputs[i*batchSize:batchSize*(i+1), :]
p_i,_ = otemp.max(dim=1, keepdims=True)
B = (p_i.le(boundary)*1).argmax(dim=1)
acc_i = otemp.argmax(1).eq(target)
for m in range(M):
is_m = B.eq(m)
Bm[m] += is_m.sum()
acc_Bm_sum[m] += torch.sum(acc_i * is_m)
conf_Bm_sum[m] += torch.sum(p_i.flatten() * is_m)
accuracy = correct.mean()
ROCAUC = roc_auc_score(target_all.cpu(), outputs.cpu(), multi_class='ovr')
ECE = (acc_Bm_sum - conf_Bm_sum).abs().sum()/(n_test)
temp = (acc_Bm_sum - conf_Bm_sum)/Bm
temp[temp!=temp]=0
MCE,_ = temp.abs().max(0)
return accuracy, Loss, ECE, MCE, ROCAUC, output
def test_MoG(net_list, testLoader, batchSize, nSamples,T, num_class=10):
# update ECE
for net in net_list:
net.eval()
accuracy = 0
n_corrects = 0
Loss = 0
num_batches_test = len(testLoader)
n_test = batchSize * num_batches_test
outputs = torch.zeros(n_test, num_class).to(DEVICE)
correct = torch.zeros(n_test).to(DEVICE)
target_all = torch.zeros(n_test).to(DEVICE)
n_list = len(net_list)
M = 10
boundary = ((torch.tensor(range(0,M))+1)/10).view(1,-1)
boundary = boundary.repeat(batchSize, 1).to(DEVICE)
acc_Bm_sum = torch.zeros(M).to(DEVICE)
conf_Bm_sum = torch.zeros(M).to(DEVICE)
Bm = torch.zeros(M).to(DEVICE)
with torch.no_grad():
for i, (data, target) in enumerate(testLoader):
data, target = data.to(DEVICE), target.to(DEVICE)
target_all[i*batchSize:batchSize*(i+1)] = target
for k, net in enumerate(net_list):
for j in range(nSamples):
output = net(data, sample=True)
outputs[i*batchSize:batchSize*(i+1), :] += output/(nSamples*n_list)
Loss += F.nll_loss(torch.log(output), target, size_average=False)/(nSamples*n_list)
# loss is log likelihood
correct[i*batchSize:batchSize*(i+1)] = (outputs[i*batchSize:batchSize*(i+1), :]).argmax(1).eq(target)
otemp = outputs[i*batchSize:batchSize*(i+1), :]
p_i,_ = otemp.max(dim=1, keepdims=True)
B = (p_i.le(boundary)*1).argmax(dim=1)
acc_i = otemp.argmax(1).eq(target)
for m in range(M):
is_m = B.eq(m)
Bm[m] += is_m.sum()
acc_Bm_sum[m] += torch.sum(acc_i * is_m)
conf_Bm_sum[m] += torch.sum(p_i.flatten() * is_m)
accuracy = correct.mean()
ROCAUC = roc_auc_score(target_all.cpu(), outputs.cpu(), multi_class='ovr')
ECE = (acc_Bm_sum - conf_Bm_sum).abs().sum()/(n_test)
temp = (acc_Bm_sum - conf_Bm_sum)/Bm
temp[temp!=temp]=0
MCE,_ = temp.abs().max(0)
return accuracy, Loss, ECE, MCE, ROCAUC, output
def cal_entropy(p):
logP = p.clone()
logP[p==0]=1
logP = torch.log(logP)
return (-logP*p).sum(dim=1)
def OOD_test(net, oodLoader, inDis_output, batchSize, nSamples, T, num_class=10):
net.eval()
num_batches_test = len(oodLoader)
n_test = batchSize * num_batches_test
n_inDis = len(inDis_output)
outputs = torch.zeros(n_test, num_class).to(DEVICE)
target_all = torch.zeros(n_test+n_inDis)
target_all[n_test:] = 1
score1 = torch.zeros(n_test+n_inDis)
score2 = torch.zeros(n_test+n_inDis)
with torch.no_grad():
for i, (data, target) in enumerate(oodLoader):
data = data.to(DEVICE)
for j in range(nSamples):
output = net(data,sample=True)
outputs[i*batchSize:batchSize*(i+1), :] += output/nSamples
entropy = cal_entropy(outputs)
entropy_ave = entropy.mean()
entropy_std = entropy.std()
score1[:n_test],_ = outputs.max(dim=1)
score1[n_test:],_ = inDis_output.max(dim=1)
score2[:n_test] = entropy_ave
score2[n_test:] = cal_entropy(inDis_output).mean()
L2D = (torch.square(outputs-0.1).sum(dim=1)).mean()
ROCAUC1 = roc_auc_score(target_all, score1, multi_class='ovr', average='weighted')
ROCAUC2 = roc_auc_score(target_all, score2, multi_class='ovr', average='weighted')
return entropy_ave, entropy_std, L2D, ROCAUC1, ROCAUC2
def OOD_test_MoG(net_list, oodLoader, inDis_output, batchSize, nSamples, T, num_class=10):
for net in net_list:
net.eval()
num_batches_test = len(oodLoader)
n_test = batchSize * num_batches_test
n_inDis = len(inDis_output)
n_list = len(net_list)
outputs = torch.zeros(n_test, num_class).to(DEVICE)
target_all = torch.zeros(n_test+n_inDis)
target_all[n_test:] = 1
score1 = torch.zeros(n_test+n_inDis)
score2 = torch.zeros(n_test+n_inDis)
with torch.no_grad():
for i, (data, target) in enumerate(oodLoader):
data = data.to(DEVICE)
for k, net in enumerate(net_list):
for j in range(nSamples):
output = net(data,sample=True)
outputs[i*batchSize:batchSize*(i+1), :] += output/(nSamples*n_list)
entropy = cal_entropy(outputs)
entropy_ave = entropy.mean()
entropy_std = entropy.std()
score1[:n_test],_ = outputs.max(dim=1)
score1[n_test:],_ = inDis_output.max(dim=1)
score2[:n_test] = entropy_ave
score2[n_test:] = cal_entropy(inDis_output).mean()
L2D = (torch.square(outputs-0.1).sum(dim=1)).mean()
ROCAUC1 = roc_auc_score(target_all, score1, multi_class='ovr', average='weighted')
ROCAUC2 = roc_auc_score(target_all, score2, multi_class='ovr', average='weighted')
return entropy_ave, entropy_std, L2D, ROCAUC1, ROCAUC2
def update_lr(optimizer,lr):
for param_group in optimizer.param_groups:
param_group['lr']= lr
def reset_net(net, pretrained_net):
net.conv.w_mu.data.copy_(pretrained_net.conv.w_mu.data)
net.block1.conv1.w_mu.data.copy_(pretrained_net.block1.conv1.w_mu.data)
net.block1.conv2.w_mu.data.copy_(pretrained_net.block1.conv2.w_mu.data)
net.block2.conv1.w_mu.data.copy_(pretrained_net.block2.conv1.w_mu.data)
net.block2.conv2.w_mu.data.copy_(pretrained_net.block2.conv2.w_mu.data)
net.block3.conv1.w_mu.data.copy_(pretrained_net.block3.conv1.w_mu.data)
net.block3.conv2.w_mu.data.copy_(pretrained_net.block3.conv2.w_mu.data)
net.block4.conv1.w_mu.data.copy_(pretrained_net.block4.conv1.w_mu.data)
net.block4.conv2.w_mu.data.copy_(pretrained_net.block4.conv2.w_mu.data)
net.block5.conv1.w_mu.data.copy_(pretrained_net.block5.conv1.w_mu.data)
net.block5.conv2.w_mu.data.copy_(pretrained_net.block5.conv2.w_mu.data)
net.block6.conv1.w_mu.data.copy_(pretrained_net.block6.conv1.w_mu.data)
net.block6.conv2.w_mu.data.copy_(pretrained_net.block6.conv2.w_mu.data)
net.fc.w_mu.data.copy_(pretrained_net.fc.w_mu.data)
net.fc.b_mu.data.copy_(pretrained_net.fc.b_mu.data)
return net
###Output
_____no_output_____
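###Markdown
The `test` and `test_MoG` functions above accumulate per-bin accuracy and confidence sums to obtain the expected calibration error (ECE). The helper below isolates that binning logic as a standalone sketch; it mirrors the M = 10 equal-width confidence bins but is not used by the code above, and the names `probs` and `labels` are assumptions.
###Code
# Hypothetical standalone ECE helper mirroring the binning in test()/test_MoG()
def expected_calibration_error(probs, labels, n_bins=10):
    conf, pred = probs.max(dim=1)            # top-class confidence and prediction per sample
    correct = pred.eq(labels).float()
    edges = torch.linspace(0, 1, n_bins + 1)
    ece = torch.zeros(1)
    for m in range(n_bins):
        in_bin = (conf > edges[m]) & (conf <= edges[m + 1])
        if in_bin.any():
            gap = correct[in_bin].mean() - conf[in_bin].mean()
            ece += in_bin.float().mean() * gap.abs()   # weight each bin by its share of samples
    return ece.item()

# Illustrative call on random predictions (the numbers carry no meaning)
fake_probs = F.softmax(torch.randn(256, 100), dim=1)
fake_labels = torch.randint(0, 100, (256,))
print(expected_calibration_error(fake_probs, fake_labels))
###Output
_____no_output_____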
###Markdown
PreTrained Model
###Code
# nonBayesian Network
class myLinear(nn.Module):
def __init__(self, n_input, n_output, sigma1):
super().__init__()
self.n_input = n_input
self.n_output = n_output
#self.T = T
#self.sigma1 = sigma1
self.w_mu = nn.Parameter(torch.Tensor(n_output,n_input).normal_(0,math.sqrt(2/n_input))) #todo
self.b_mu = nn.Parameter(torch.Tensor(n_output).normal_(0,math.sqrt(2/n_input)))
def forward(self, input, sample=False):
w = self.w_mu
b = self.b_mu
return F.linear(input, w, b)
class myConv2D(nn.Module):
def __init__(self, in_channels, out_channels, sigma1, kernel_size=3, stride=1, padding=1):
super().__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.kernel_size = kernel_size
self.stride = stride
self.padding = padding
self.w_mu = nn.Parameter(torch.Tensor(out_channels,in_channels, kernel_size, kernel_size))
self.reset_para()
def reset_para(self):
nn.init.kaiming_uniform_(self.w_mu, a=math.sqrt(5))
def forward(self, input, sample=False):
w = self.w_mu
return F.conv2d(input, w, bias=None, stride=self.stride, padding=self.padding)
def myConv3x3(in_channels, out_channels, sigma1, stride=1):
return myConv2D(in_channels, out_channels, sigma1, kernel_size=3,stride=stride, padding=1)
class myResidualBlock(nn.Module):
def __init__(self, in_channels, out_channels, sigma1, stride=1, downsample=None):
super().__init__()
self.conv1 = myConv3x3(in_channels, out_channels, sigma1, stride)
self.frn1 = nn.BatchNorm2d(out_channels)
self.tlu1 = nn.ReLU(inplace=True)
self.conv2 = myConv3x3(out_channels, out_channels, sigma1)
self.frn2 = nn.BatchNorm2d(out_channels)
self.tlu2 = nn.ReLU(inplace=True)
self.downsample = downsample
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.frn1(out)
out = self.tlu1(out)
out = self.conv2(out)
out = self.frn2(out)
if self.downsample:
residual = self.downsample(x)
out += residual
out = self.tlu2(out)
return out
class myResNet14(nn.Module):
def __init__(self, sigma1, num_class=10):
super().__init__()
self.in_channels = 16
self.conv = myConv3x3(3,16, sigma1)
self.frn = nn.BatchNorm2d(16)
self.tlu = nn.ReLU(inplace=True)
self.block1 = myResidualBlock(16,16,sigma1)
self.block2 = myResidualBlock(16,16,sigma1)
downsample1 = nn.Sequential(myConv3x3(16,32,sigma1,2), nn.BatchNorm2d(32))
self.block3 = myResidualBlock(16,32,sigma1,2,downsample1)
self.block4 = myResidualBlock(32,32,sigma1)
downsample2 = nn.Sequential(myConv3x3(32,64,sigma1,2), nn.BatchNorm2d(64))
self.block5 = myResidualBlock(32,64,sigma1,2,downsample2)
self.block6 = myResidualBlock(64,64,sigma1)
self.avg_pool = nn.AvgPool2d(8)
self.fc = myLinear(64, num_class, sigma1)
def forward(self, x, sample=False):
out = self.conv(x)
out = self.frn(out)
out = self.tlu(out)
out = self.block1(out)
out = self.block2(out)
out = self.block3(out)
out = self.block4(out)
out = self.block5(out)
out = self.block6(out)
out = self.avg_pool(out)
out = out.view(out.size(0),-1)
out = F.softmax(self.fc(out, sample), dim=1)
return out
def free_energy(self, input, target, batch_size, num_batches, n_samples, T):
negative_log_likelihood = torch.zeros(1).to(DEVICE)
for i in range(n_samples):
output = self(input, sample=True)
negative_log_likelihood += F.nll_loss(torch.log(output+eps), target, size_average=False)/n_samples
# new target function, not absorb T into prior
loss = negative_log_likelihood / T * num_batches
corrects = output.argmax(dim=1).eq(target).sum().item()
return loss, corrects,0,0,0
pretrained_net = myResNet14(1,num_class=100).to(DEVICE)
with open(os.path.join(path, "pretrained/net2.pkl"), "rb") as f:
pretrained_net.load_state_dict(torch.load(f))
testAcc, testLoss, testECE1, testMCE1, AUCROC1, output = test(pretrained_net, test_loader, 128, 10, 1, num_class=100)
testAcc
###Output
_____no_output_____
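###Markdown
The deterministic `myResNet14` above mirrors the Bayesian architecture but keeps only the posterior means, so it can be trained with plain SGD/Adam and then used to initialise the mu parameters via `reset_net`. A rough parameter-count comparison, a sketch assuming the classes defined earlier (the Bayesian net stores a mu and a rho per weight, so its count is roughly double):
###Code
# Hypothetical comparison of parameter counts (illustrative only)
det_params = sum(p.numel() for p in myResNet14(1, num_class=100).parameters())
bayes_params = sum(p.numel() for p in BayesianResNet14(ResidualBlock, torch.tensor(1.0), num_class=100).parameters())
print(det_params, bayes_params)
###Output
_____no_output_____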
###Markdown
0.4730; 0.4696; 0.4601. Initialise with 3 SGD solutions
###Code
batch_size = 128
n_samples = 1
T_list = torch.pow(10,-1*torch.tensor(range(0,35,5))/10).to(DEVICE)
sigma = torch.sqrt(torch.tensor(1))
epochs = 1
max_lr = 0.0001
curr_lr = 0.0001
MoG_net = []
###Output
_____no_output_____
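###Markdown
The cell above sweeps seven posterior temperatures, T = 10^(-k/2) for k = 0,...,6, i.e. roughly 1, 0.316, 0.1, 0.0316, 0.01, 0.00316 and 0.001; this matches the length-7 result tensors allocated below. A quick check, assuming `T_list` from the cell above:
###Code
print(T_list)  # expected: tensor([1.0000, 0.3162, 0.1000, 0.0316, 0.0100, 0.0032, 0.0010], device='cuda:0') or similar
###Output
_____no_output_____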
###Markdown
testAcc for pretrained nets: 0.8263, 0.8321, 0.8229
###Code
for t,T in enumerate(T_list):
for i in range(3):
print(i)
pretrained_net = myResNet14(1,num_class=100).to(DEVICE)
with open(os.path.join(path,f"pretrained/net{i}.pkl"), "rb") as f:
pretrained_net.load_state_dict(torch.load(f))
net = BayesianResNet14(ResidualBlock, sigma, num_class=100).to(DEVICE)
net = reset_net(net, pretrained_net)
optimizer = optim.Adam(net.parameters(),lr=curr_lr)
for epoch in range(epochs):
trainAcc, trainLoss = train(net, optimizer, epoch+i*epochs, training_loader, batch_size, n_samples,T)
curr_lr = max_lr/2 * (1+math.cos((epoch)/epochs*math.pi))
update_lr(optimizer,curr_lr)
with open(os.path.join(path,f"pretrainedCosine/net{t}{i}.pt"), "wb") as f:
torch.save(net.state_dict(),f)
testAcc_MoG = torch.zeros(7,).to(DEVICE)
testLoss_MoG = torch.zeros(7,).to(DEVICE)
testECE_MoG = torch.zeros(7,).to(DEVICE)
testROCAUC_MoG = torch.zeros(7,).to(DEVICE)
entropy_ave_MoG = torch.zeros(7,).to(DEVICE)
entropy_std_MoG = torch.zeros(7,).to(DEVICE)
L2D_MoG = torch.zeros(7,).to(DEVICE)
ROCAUC1_MoG = torch.zeros(7,).to(DEVICE)
ROCAUC2_MoG = torch.zeros(7,).to(DEVICE)
for t,T in enumerate(T_list):
print(t)
MoG_net = []
for i in range(3):
net = BayesianResNet14(ResidualBlock, sigma, num_class=100).to(DEVICE)
with open(os.path.join(path,f"pretrainedCosine/net{t}{i}.pt"), "rb") as f:
net.load_state_dict(torch.load(f))
MoG_net.append(net)
testAcc_MoG[t], testLoss_MoG[t], testECE_MoG[t], _, testROCAUC_MoG[t], out =test_MoG(MoG_net, test_loader, batch_size, 17,T, num_class=100)
entropy_ave_MoG[t], entropy_std_MoG[t], L2D_MoG[t], ROCAUC1_MoG[t], ROCAUC2_MoG[t] = OOD_test_MoG(MoG_net, svhn_loader, out, batch_size, 17, T, num_class=100)
with open(os.path.join(path, "results/test_accuracy.pt"), "wb") as f:
torch.save(testAcc_MoG.cpu(),f)
with open(os.path.join(path,"results/test_loss.pt"), "wb") as f:
torch.save(testLoss_MoG.cpu(),f)
with open(os.path.join(path,"results/testECE.pt"), "wb") as f:
torch.save(testECE_MoG.cpu(),f)
with open(os.path.join(path,"results/entropy_ave.pt"), "wb") as f:
torch.save(entropy_ave_MoG.cpu(),f)
with open(os.path.join(path,"results/entropy_std.pt"), "wb") as f:
torch.save(entropy_std_MoG.cpu(),f)
with open(os.path.join(path,"results/L2D.pt"), "wb") as f:
torch.save(L2D_MoG.cpu(),f)
with open(os.path.join(path,"results/test_ROCAUC.pt"), "wb") as f:
torch.save(testROCAUC_MoG.cpu(),f)
with open(os.path.join(path,"results/ood_ROCAUC1.pt"), "wb") as f:
torch.save(ROCAUC1_MoG.cpu(),f)
with open(os.path.join(path,"results/ood_ROCAUC2.pt"), "wb") as f:
torch.save(ROCAUC2_MoG.cpu(),f)
ROCAUC1_MoG
testECE_MoG
testAcc_MoG
entropy_ave_MoG
###Output
_____no_output_____ |
notebooks/QGL2 AllXY.ipynb | ###Markdown
Compiling a QGL2 AllXY and plotting the output. Imports
###Code
from pyqgl2.main import compile_function, qgl2_compile_to_hardware
from pyqgl2.test_cl import create_default_channelLibrary
from pyqgl2.qreg import QRegister
from QGL import plot_pulse_files, ChannelLibrary
###Output
_____no_output_____
###Markdown
Should sequences be compiled to hardware, or just to QGL?
###Code
toHW = True
###Output
_____no_output_____
###Markdown
Create a test ChannelLibrary; alternatively, load a library you already defined
###Code
create_default_channelLibrary(toHW, True)
# Or create a new ChannelLibrary with channels
# cl = ChannelLibrary(db_resource_name=":memory:")
# q1 = cl.new_qubit('q1')
# Most calls require a label and an address
# aps2_1 = cl.new_APS2("BBNAPS1", address="192.168.5.101")
# aps2_2 = cl.new_APS2("BBNAPS2", address="192.168.5.102")
# dig_1 = cl.new_X6("X6_1", address=0)
# Label, instrument type, address, and an additional config parameter
# h1 = cl.new_source("Holz1", "HolzworthHS9000", "HS9004A-009-1", power=-30)
# h2 = cl.new_source("Holz2", "HolzworthHS9000", "HS9004A-009-2", power=-30)
# Qubit q1 is controlled by AWG aps2_1, and uses microwave source h1
# cl.set_control(q1, aps2_1, generator=h1)
# Qubit q1 is measured by AWG aps2_2 and digitizer dig_1, and uses microwave source h2
# cl.set_measure(q1, aps2_2, dig_1.ch(1), generator=h2)
# The AWG aps2_1 is the master AWG, and distributes a synchronization trigger on its second marker channel
# cl.set_master(aps2_1, aps2_1.ch("m2"))
# cl.commit()
###Output
_____no_output_____
###Markdown
Create needed qubit(s)
###Code
# For QGL2, use a QRegister, not a QGL Qubit
q = QRegister(1)
###Output
_____no_output_____
###Markdown
Compile to QGL1. To turn on debug output, uncomment the next 4 lines
###Code
#from pyqgl2.ast_util import NodeError
#from pyqgl2.debugmsg import DebugMsg
#DebugMsg.set_level(1)
#NodeError.MUTE_ERR_LEVEL = NodeError.NODE_ERROR_NONE
# Insert proper path to QGL2 source and name of qgl2main if not so marked
# Here we compile the named function in the named file from QGL2 to QGL1 and return the new function
qgl1MainFunc = compile_function("../src/python/qgl2/basic_sequences/AllXY.py", "AllXY", (q,))
###Output
_____no_output_____
###Markdown
Generate pulse sequences
###Code
# Now run the QGL1 function, producing a list of sequences
seqs = qgl1MainFunc()
###Output
_____no_output_____
###Markdown
Optionally compile to machine instructions
###Code
if toHW:
from IPython.display import display
metaFileName = qgl2_compile_to_hardware(seqs, "AllXY/AllXY")
print(f"Generated sequence details in '{metaFileName}'")
# Plot the sequences
p = plot_pulse_files(metaFileName)
# Explicitly display the graph which fails to auto-draw in some cases
display(p)
else:
from QGL.Scheduler import schedule
from IPython.lib.pretty import pretty
print(pretty(schedule(seqs)))
###Output
_____no_output_____ |
original_files/.ipynb_checkpoints/train-py-checkpoint.ipynb | ###Markdown
Set resnet models
###Code
import torch.nn as nn
import torch.nn.functional as F
# code from https://github.com/KellerJordan/ResNet-PyTorch-CIFAR10/blob/master/model.py
class IdentityPadding(nn.Module):
def __init__(self, in_channels, out_channels, stride):
super(IdentityPadding, self).__init__()
self.pooling = nn.MaxPool2d(1, stride=stride)
self.add_channels = out_channels - in_channels
def forward(self, x):
out = F.pad(x, (0, 0, 0, 0, 0, self.add_channels))
out = self.pooling(out)
return out
class ResidualBlock(nn.Module):
def __init__(self, in_channels, out_channels, stride=1, down_sample=False):
super(ResidualBlock, self).__init__()
self.down_sample = down_sample
self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(out_channels)
self.relu = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3,
stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(out_channels)
self.stride = stride
self.dropout = nn.Dropout(0.2)
if down_sample:
self.down_sample = IdentityPadding(in_channels, out_channels, stride)
else:
self.down_sample = None
def forward(self, x):
shortcut = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.dropout(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.dropout(out)
if self.down_sample is not None:
shortcut = self.down_sample(x)
out += shortcut
out = self.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, num_layers, block, num_classes=7):
super(ResNet, self).__init__()
self.num_layers = num_layers
self.conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3,
stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(16)
self.relu = nn.ReLU(inplace=True)
# feature map size = 112x112x16
self.layers_2n = self.get_layers(block, 16, 16, stride=1)
# feature map size = 56x56x32
self.layers_4n = self.get_layers(block, 16, 32, stride=2)
# feature map size = 28x28x64
self.layers_6n = self.get_layers(block, 32, 64, stride=2)
# output layers
self.avg_pool = nn.MaxPool2d(28, stride=1)
self.fc_out = nn.Linear(64, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out',
nonlinearity='relu')
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
def get_layers(self, block, in_channels, out_channels, stride):
if stride == 2:
down_sample = True
else:
down_sample = False
layers_list = nn.ModuleList(
[block(in_channels, out_channels, stride, down_sample)])
for _ in range(self.num_layers - 1):
layers_list.append(block(out_channels, out_channels))
return nn.Sequential(*layers_list)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.layers_2n(x)
x = self.layers_4n(x)
x = self.layers_6n(x)
x = self.avg_pool(x)
x = x.view(x.size(0), -1)
x = self.fc_out(x)
return x
def resnet():
block = ResidualBlock
# total number of layers is 6n + 2; if n is 5 the depth of the network is 32 (here n = 3, giving depth 20)
model = ResNet(3, block)
return model
###Output
_____no_output_____
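###Markdown
The `resnet()` factory above builds a 6n + 2 = 20 layer network (n = 3) sized for 112x112 inputs: the two stride-2 stages reduce 112 -> 56 -> 28, and the final 28x28 pooling leaves a 64-dimensional feature for the 7-class head. A minimal shape check, a sketch assuming only the classes defined in the cell above:
###Code
# Hypothetical sanity check of the plain ResNet (not part of the original training run)
import torch
toy_batch = torch.randn(2, 3, 112, 112)   # two fake 112x112 RGB crops
toy_model = resnet()
print(toy_model(toy_batch).shape)         # expected: torch.Size([2, 7]), one logit per class
###Output
_____no_output_____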
###Markdown
SE Resnet from https://github.com/moskomule/senet.pytorch/blob/master/senet/se_resnet.py
###Code
from torch.hub import load_state_dict_from_url
from torchvision.models import ResNet
class SELayer(nn.Module):
def __init__(self, channel, reduction=16):
super(SELayer, self).__init__()
self.avg_pool = nn.AdaptiveAvgPool2d(1)
self.fc = nn.Sequential(
nn.Linear(channel, channel // reduction, bias=False),
nn.ReLU(inplace=True),
nn.Linear(channel // reduction, channel, bias=False),
nn.Sigmoid()
)
def forward(self, x):
b, c, _, _ = x.size()
y = self.avg_pool(x).view(b, c)
y = self.fc(y).view(b, c, 1, 1)
return x * y.expand_as(x)
def conv3x3(in_planes, out_planes, stride=1):
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False)
class SEBasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
base_width=64, dilation=1, norm_layer=None,
*, reduction=16):
super(SEBasicBlock, self).__init__()
self.conv1 = conv3x3(inplanes, planes, stride)
self.bn1 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes, 1)
self.bn2 = nn.BatchNorm2d(planes)
self.se = SELayer(planes, reduction)
self.downsample = downsample
self.stride = stride
self.dropout = nn.Dropout(p=0.5)
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.se(out)
out = self.dropout(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class SEBottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
base_width=64, dilation=1, norm_layer=None,
*, reduction=16):
super(SEBottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes * 4)
self.relu = nn.ReLU(inplace=True)
self.se = SELayer(planes * 4, reduction)
self.downsample = downsample
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
out = self.se(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
def se_resnet18(num_classes=7):
"""Constructs a ResNet-18 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBasicBlock, [2, 2, 2, 2], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
def se_resnet34(num_classes=7):
"""Constructs a ResNet-34 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBasicBlock, [3, 4, 6, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
def se_resnet50(num_classes=7, pretrained=False):
"""Constructs a ResNet-50 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBottleneck, [3, 4, 6, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
if pretrained:
model.load_state_dict(load_state_dict_from_url(
"https://github.com/moskomule/senet.pytorch/releases/download/archive/seresnet50-60a8950a85b2b.pkl"))
return model
def se_resnet101(num_classes=7):
"""Constructs a ResNet-101 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBottleneck, [3, 4, 23, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
def se_resnet152(num_classes=7):
"""Constructs a ResNet-152 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBottleneck, [3, 8, 36, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
###Output
_____no_output_____
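###Markdown
The squeeze-and-excitation layer above learns one multiplicative gate per channel: global average pooling squeezes each channel to a scalar, the small bottleneck MLP maps those scalars to per-channel weights in (0, 1), and the input is rescaled channel-wise. A minimal sketch of that behaviour, assuming only `SELayer` from the cell above:
###Code
# Hypothetical check that SELayer keeps the shape and only rescales whole channels
import torch
_x = torch.randn(2, 64, 28, 28)
_se = SELayer(channel=64, reduction=16)
_y = _se(_x)
print(_y.shape)                   # torch.Size([2, 64, 28, 28]), unchanged
print((_y / _x)[0, :4, 0, 0])     # per-channel scale factors, each in (0, 1)
###Output
_____no_output_____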
###Markdown
**Import files**
###Code
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
import torch
import torch.optim as optim
from torch.optim import lr_scheduler
import torch.backends.cudnn as cudnn
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
import argparse
from tensorboardX import SummaryWriter
###Output
_____no_output_____
###Markdown
Import dataset
###Code
import pickle
import random
from PIL import Image
print('==> Preparing data..')
cropped_image_list = []
label_list = []
img_path = '../images/'
with open('result.pickle', 'rb') as f:
data = pickle.load(f)
transforms_train = transforms.Compose([
transforms.RandomCrop(112, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
transforms_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
#number of train set and test set
num_data=len(data.index)
num_test_data = int(num_data/10)
num_train_data = num_data - num_test_data
classes = ('JM', 'JN', 'JH', 'JK', 'RM', 'VV', 'SG')
class_dict = {'JM': 0,'JN':1, 'JH':2, 'JK':3, 'RM':4, 'VV':5, 'SG':6 }
for i in data.index:
img = Image.open(img_path+data[0][i])
cropped_image_list.append(img)
label = class_dict[data[1][i]]
label_list.append(label)
#dataset making by random select
dataset_train = []
dataset_test = []
#random select test data index
rand = np.random.choice(np.arange(num_data), num_test_data, replace=False)
# a Dataset class with __init__() would be cleaner for holding these variables (a minimal sketch follows after this cell),
# but this time we keep it simple and build plain lists without a class
count_test=0
count_train=0
for i in np.arange(num_data):
if i in rand:
img = transforms_test(cropped_image_list[i])
dataset_test.append([img,label_list[i]])
else:
img = transforms_train(cropped_image_list[i])
dataset_train.append([img,label_list[i]])
#dataset_test = np.asarray(dataset_test)
#dataset_train = np.asarray(dataset_train)
###Output
==> Preparing data..
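###Markdown
As noted in the comment above, the same data could be wrapped in a `torch.utils.data.Dataset` instead of plain lists. The sketch below is a minimal, hypothetical version assuming `cropped_image_list`, `label_list` and the transforms from the previous cell; it is not used by the training code that follows.
###Code
# Hypothetical Dataset wrapper around the pre-cropped PIL images (illustrative sketch only)
from torch.utils.data import Dataset

class FaceDataset(Dataset):
    def __init__(self, images, labels, transform):
        self.images = images        # list of PIL images
        self.labels = labels        # list of integer class ids
        self.transform = transform  # torchvision transform pipeline
    def __len__(self):
        return len(self.images)
    def __getitem__(self, idx):
        return self.transform(self.images[idx]), self.labels[idx]

# Example usage (hypothetical): wrap the full image list with the training transforms
# full_train_ds = FaceDataset(cropped_image_list, label_list, transforms_train)
###Output
_____no_output_____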
###Markdown
Focal loss: https://github.com/foamliu/InsightFace-v2/blob/master/focal_loss.py
###Code
class FocalLoss(nn.Module):
def __init__(self, gamma=0):
super(FocalLoss, self).__init__()
self.gamma = gamma
self.ce = torch.nn.CrossEntropyLoss()
def forward(self, input, target):
logp = self.ce(input, target)
p = torch.exp(-logp)
loss = (1 - p) ** self.gamma * logp
return loss.mean()
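# Illustrative check (an addition, not part of the original script): with gamma=0 the focal
# loss reduces to plain cross-entropy, since the modulating factor (1 - p)**0 equals 1.
_logits = torch.randn(4, 7)
_targets = torch.randint(0, 7, (4,))
print(FocalLoss(gamma=0)(_logits, _targets).item(), nn.CrossEntropyLoss()(_logits, _targets).item())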
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
exp_id=0
lr = 0.1
batch_size = 16
batch_size_test=2
num_worker=1
resume = None
logdir = './output/'
device = 'cuda' if torch.cuda.is_available() else 'cpu'
train_loader = DataLoader(dataset_train, batch_size=batch_size,
shuffle=True, num_workers=num_worker)
test_loader = DataLoader(dataset_test, batch_size=batch_size_test,
shuffle=False, num_workers=num_worker)
# bts class
classes = ('JM', 'JN', 'JH', 'JK', 'RM', 'VV', 'SG')
print('==> Making model..')
#net = resnet()
net = se_resnet18()
net = net.to(device)
num_params = sum(p.numel() for p in net.parameters() if p.requires_grad)
print('The number of parameters of model is', num_params)
# print(net)
if resume is not None:
checkpoint = torch.load('../save_model/' + str(exp_id))
net.load_state_dict(checkpoint['net'])
#criterion = nn.CrossEntropyLoss()
criterion = FocalLoss(gamma=2.0).to(device)
#criterion = nn.TripletMarginLoss(margin=1.0, p=2.0,
# eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean')
optimizer = optim.Adam(net.parameters(), lr=0.001, weight_decay=1e-4)
decay_epoch = [2400, 3600]
step_lr_scheduler = lr_scheduler.MultiStepLR(optimizer,
milestones=decay_epoch, gamma=0.1)
writer = SummaryWriter(logdir)
def train(epoch, global_steps):
net.train()
train_loss = 0
correct = 0
total = 0
for batch_idx, (inputs, targets) in enumerate(train_loader):
global_steps += 1
step_lr_scheduler.step()
inputs = inputs.to(device)
targets = targets.to(device)
outputs = net(inputs)
loss = criterion(outputs, targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
train_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
acc = 100 * correct / total
print('train epoch : {} [{}/{}]| loss: {:.3f} | acc: {:.3f}'.format(
epoch, batch_idx, len(train_loader), train_loss/(batch_idx+1), acc))
writer.add_scalar('./log/train error', 100 - acc, global_steps)
return global_steps
def test(epoch, best_acc, global_steps):
net.eval()
test_loss = 0
correct = 0
total = 0
with torch.no_grad():
for batch_idx, (inputs, targets) in enumerate(test_loader):
inputs = inputs.to(device)
targets = targets.to(device)
outputs = net(inputs)
loss = criterion(outputs, targets)
test_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
acc = 100 * correct / total
print('test epoch : {} [{}/{}]| loss: {:.3f} | acc: {:.3f}'.format(
epoch, batch_idx, len(test_loader), test_loss/(batch_idx+1), acc))
writer.add_scalar('./log/test error', 100 - acc, global_steps)
if acc > best_acc:
print('==> Saving model..')
state = {
'net': net.state_dict(),
'acc': acc,
'epoch': epoch,
}
if not os.path.isdir('../save_model'):
os.mkdir('../save_model')
torch.save(state, '../save_model/ckpt.pth')
best_acc = acc
return best_acc
if __name__=='__main__':
best_acc = 0
epoch = 0
global_steps = 0
if resume is not None:
test(epoch=0, best_acc=0, global_steps=0)
else:
while True:
epoch += 1
global_steps = train(epoch, global_steps)
best_acc = test(epoch, best_acc, global_steps)
print('best test accuracy is ', best_acc)
if global_steps >= 4800:
break
# Any results you write to the current directory are saved as output.
###Output
==> Making model..
The number of parameters of model is 11267143
###Markdown
Set resnet models
###Code
import torch.nn as nn
import torch.nn.functional as F
# code from https://github.com/KellerJordan/ResNet-PyTorch-CIFAR10/blob/master/model.py
class IdentityPadding(nn.Module):
def __init__(self, in_channels, out_channels, stride):
super(IdentityPadding, self).__init__()
self.pooling = nn.MaxPool2d(1, stride=stride)
self.add_channels = out_channels - in_channels
def forward(self, x):
out = F.pad(x, (0, 0, 0, 0, 0, self.add_channels))
out = self.pooling(out)
return out
class ResidualBlock(nn.Module):
def __init__(self, in_channels, out_channels, stride=1, down_sample=False):
super(ResidualBlock, self).__init__()
self.down_sample = down_sample
self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(out_channels)
self.relu = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3,
stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(out_channels)
self.stride = stride
self.dropout = nn.Dropout(0.2)
if down_sample:
self.down_sample = IdentityPadding(in_channels, out_channels, stride)
else:
self.down_sample = None
def forward(self, x):
shortcut = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.dropout(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.dropout(out)
if self.down_sample is not None:
shortcut = self.down_sample(x)
out += shortcut
out = self.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, num_layers, block, num_classes=7):
super(ResNet, self).__init__()
self.num_layers = num_layers
self.conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3,
stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(16)
self.relu = nn.ReLU(inplace=True)
# feature map size = 112x112x16
self.layers_2n = self.get_layers(block, 16, 16, stride=1)
# feature map size = 56x56x32
self.layers_4n = self.get_layers(block, 16, 32, stride=2)
# feature map size = 28x28x64
self.layers_6n = self.get_layers(block, 32, 64, stride=2)
# output layers
self.avg_pool = nn.MaxPool2d(28, stride=1)
self.fc_out = nn.Linear(64, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out',
nonlinearity='relu')
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
def get_layers(self, block, in_channels, out_channels, stride):
if stride == 2:
down_sample = True
else:
down_sample = False
layers_list = nn.ModuleList(
[block(in_channels, out_channels, stride, down_sample)])
for _ in range(self.num_layers - 1):
layers_list.append(block(out_channels, out_channels))
return nn.Sequential(*layers_list)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.layers_2n(x)
x = self.layers_4n(x)
x = self.layers_6n(x)
x = self.avg_pool(x)
x = x.view(x.size(0), -1)
x = self.fc_out(x)
return x
def resnet():
block = ResidualBlock
# total number of layers is 6n + 2; if n is 5 the depth of the network is 32 (here n = 3, giving depth 20)
model = ResNet(3, block)
return model
###Output
_____no_output_____
###Markdown
SE Resnet from https://github.com/moskomule/senet.pytorch/blob/master/senet/se_resnet.py
###Code
from torch.hub import load_state_dict_from_url
from torchvision.models import ResNet
class SELayer(nn.Module):
def __init__(self, channel, reduction=16):
super(SELayer, self).__init__()
self.avg_pool = nn.AdaptiveAvgPool2d(1)
self.fc = nn.Sequential(
nn.Linear(channel, channel // reduction, bias=False),
nn.ReLU(inplace=True),
nn.Linear(channel // reduction, channel, bias=False),
nn.Sigmoid()
)
def forward(self, x):
b, c, _, _ = x.size()
y = self.avg_pool(x).view(b, c)
y = self.fc(y).view(b, c, 1, 1)
return x * y.expand_as(x)
def conv3x3(in_planes, out_planes, stride=1):
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False)
class SEBasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
base_width=64, dilation=1, norm_layer=None,
*, reduction=16):
super(SEBasicBlock, self).__init__()
self.conv1 = conv3x3(inplanes, planes, stride)
self.bn1 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes, 1)
self.bn2 = nn.BatchNorm2d(planes)
self.se = SELayer(planes, reduction)
self.downsample = downsample
self.stride = stride
self.dropout = nn.Dropout(p=0.5)
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.se(out)
out = self.dropout(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class SEBottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
base_width=64, dilation=1, norm_layer=None,
*, reduction=16):
super(SEBottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes * 4)
self.relu = nn.ReLU(inplace=True)
self.se = SELayer(planes * 4, reduction)
self.downsample = downsample
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
out = self.se(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
def se_resnet18(num_classes=7):
"""Constructs a ResNet-18 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBasicBlock, [2, 2, 2, 2], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
def se_resnet34(num_classes=7):
"""Constructs a ResNet-34 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBasicBlock, [3, 4, 6, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
def se_resnet50(num_classes=7, pretrained=False):
"""Constructs a ResNet-50 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBottleneck, [3, 4, 6, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
if pretrained:
model.load_state_dict(load_state_dict_from_url(
"https://github.com/moskomule/senet.pytorch/releases/download/archive/seresnet50-60a8950a85b2b.pkl"))
return model
def se_resnet101(num_classes=7):
"""Constructs a ResNet-101 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBottleneck, [3, 4, 23, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
def se_resnet152(num_classes=7):
"""Constructs a ResNet-152 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBottleneck, [3, 8, 36, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
###Output
_____no_output_____
###Markdown
**Import files**
###Code
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
import torch
import torch.optim as optim
from torch.optim import lr_scheduler
import torch.backends.cudnn as cudnn
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
import argparse
from tensorboardX import SummaryWriter
###Output
_____no_output_____
###Markdown
Import dataset
###Code
import pickle
import random
from PIL import Image
print('==> Preparing data..')
cropped_image_list = []
label_list = []
img_path = './cropped/'
with open('result.pickle', 'rb') as f:
data = pickle.load(f)
transforms_train = transforms.Compose([
transforms.RandomCrop(112, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
transforms_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
#number of train set and test set
num_data=len(data.index)
num_test_data = int(num_data/10)
num_train_data = num_data - num_test_data
classes = ('JM', 'JN', 'JH', 'JK', 'RM', 'VV', 'SG')
class_dict = {'JM': 0,'JN':1, 'JH':2, 'JK':3, 'RM':4, 'VV':5, 'SG':6 }
for i in data.index:
img = Image.open(img_path+data[0][i])
cropped_image_list.append(img)
label = class_dict[data[1][i]]
label_list.append(label)
#dataset making by random select
dataset_train = []
dataset_test = []
#random select test data index
rand = np.random.choice(np.arange(num_data), num_test_data, replace=False)
# a Dataset class with __init__() would be cleaner for holding these variables,
# but this time we keep it simple and build plain lists without a class
count_test=0
count_train=0
for i in np.arange(num_data):
if i in rand:
img = transforms_test(cropped_image_list[i])
dataset_test.append([img,label_list[i]])
else:
img = transforms_train(cropped_image_list[i])
dataset_train.append([img,label_list[i]])
#dataset_test = np.asarray(dataset_test)
#dataset_train = np.asarray(dataset_train)
###Output
_____no_output_____
###Markdown
Focal loss: https://github.com/foamliu/InsightFace-v2/blob/master/focal_loss.py
###Code
class FocalLoss(nn.Module):
def __init__(self, gamma=0):
super(FocalLoss, self).__init__()
self.gamma = gamma
self.ce = torch.nn.CrossEntropyLoss()
def forward(self, input, target):
logp = self.ce(input, target)
p = torch.exp(-logp)
loss = (1 - p) ** self.gamma * logp
return loss.mean()
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
exp_id=0
lr = 0.1
batch_size = 16
batch_size_test=2
num_worker=1
resume = None
logdir = './output/'
device = 'cuda' if torch.cuda.is_available() else 'cpu'
train_loader = DataLoader(dataset_train, batch_size=batch_size,
shuffle=True, num_workers=num_worker)
test_loader = DataLoader(dataset_test, batch_size=batch_size_test,
shuffle=False, num_workers=num_worker)
# bts class
classes = ('JM', 'JN', 'JH', 'JK', 'RM', 'VV', 'SG')
print('==> Making model..')
#net = resnet()
net = se_resnet18()
net = net.to(device)
num_params = sum(p.numel() for p in net.parameters() if p.requires_grad)
print('The number of parameters of model is', num_params)
# print(net)
if resume is not None:
checkpoint = torch.load('./save_model/' + str(exp_id))
net.load_state_dict(checkpoint['net'])
#criterion = nn.CrossEntropyLoss()
criterion = FocalLoss(gamma=2.0).to(device)
#criterion = nn.TripletMarginLoss(margin=1.0, p=2.0,
# eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean')
optimizer = optim.Adam(net.parameters(), lr=0.001, weight_decay=1e-4)
decay_epoch = [2400, 3600]
step_lr_scheduler = lr_scheduler.MultiStepLR(optimizer,
milestones=decay_epoch, gamma=0.1)
writer = SummaryWriter(logdir)
def train(epoch, global_steps):
net.train()
train_loss = 0
correct = 0
total = 0
for batch_idx, (inputs, targets) in enumerate(train_loader):
global_steps += 1
step_lr_scheduler.step()
inputs = inputs.to(device)
targets = targets.to(device)
outputs = net(inputs)
loss = criterion(outputs, targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
train_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
acc = 100 * correct / total
print('train epoch : {} [{}/{}]| loss: {:.3f} | acc: {:.3f}'.format(
epoch, batch_idx, len(train_loader), train_loss/(batch_idx+1), acc))
writer.add_scalar('./log/train error', 100 - acc, global_steps)
return global_steps
def test(epoch, best_acc, global_steps):
net.eval()
test_loss = 0
correct = 0
total = 0
with torch.no_grad():
for batch_idx, (inputs, targets) in enumerate(test_loader):
inputs = inputs.to(device)
targets = targets.to(device)
outputs = net(inputs)
loss = criterion(outputs, targets)
test_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
acc = 100 * correct / total
print('test epoch : {} [{}/{}]| loss: {:.3f} | acc: {:.3f}'.format(
epoch, batch_idx, len(test_loader), test_loss/(batch_idx+1), acc))
writer.add_scalar('./log/test error', 100 - acc, global_steps)
if acc > best_acc:
print('==> Saving model..')
state = {
'net': net.state_dict(),
'acc': acc,
'epoch': epoch,
}
if not os.path.isdir('save_model'):
os.mkdir('save_model')
torch.save(state, './save_model/ckpt.pth')
best_acc = acc
return best_acc
if __name__=='__main__':
best_acc = 0
epoch = 0
global_steps = 0
if resume is not None:
test(epoch=0, best_acc=0, global_steps=0)
else:
while True:
epoch += 1
global_steps = train(epoch, global_steps)
best_acc = test(epoch, best_acc, global_steps)
print('best test accuracy is ', best_acc)
if global_steps >= 4800:
break
# Any results you write to the current directory are saved as output.
###Output
_____no_output_____
###Markdown
Set resnet models
###Code
import torch.nn as nn
import torch.nn.functional as F
# code from https://github.com/KellerJordan/ResNet-PyTorch-CIFAR10/blob/master/model.py
class IdentityPadding(nn.Module):
def __init__(self, in_channels, out_channels, stride):
super(IdentityPadding, self).__init__()
self.pooling = nn.MaxPool2d(1, stride=stride)
self.add_channels = out_channels - in_channels
def forward(self, x):
out = F.pad(x, (0, 0, 0, 0, 0, self.add_channels))
out = self.pooling(out)
return out
class ResidualBlock(nn.Module):
def __init__(self, in_channels, out_channels, stride=1, down_sample=False):
super(ResidualBlock, self).__init__()
self.down_sample = down_sample
self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(out_channels)
self.relu = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3,
stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(out_channels)
self.stride = stride
self.dropout = nn.Dropout(0.2)
if down_sample:
self.down_sample = IdentityPadding(in_channels, out_channels, stride)
else:
self.down_sample = None
def forward(self, x):
shortcut = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.dropout(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.dropout(out)
if self.down_sample is not None:
shortcut = self.down_sample(x)
out += shortcut
out = self.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, num_layers, block, num_classes=7):
super(ResNet, self).__init__()
self.num_layers = num_layers
self.conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3,
stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(16)
self.relu = nn.ReLU(inplace=True)
# feature map size = 112x112x16
self.layers_2n = self.get_layers(block, 16, 16, stride=1)
# feature map size = 56x56x32
self.layers_4n = self.get_layers(block, 16, 32, stride=2)
# feature map size = 28x28x64
self.layers_6n = self.get_layers(block, 32, 64, stride=2)
# output layers
self.avg_pool = nn.MaxPool2d(28, stride=1)
self.fc_out = nn.Linear(64, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out',
nonlinearity='relu')
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
def get_layers(self, block, in_channels, out_channels, stride):
if stride == 2:
down_sample = True
else:
down_sample = False
layers_list = nn.ModuleList(
[block(in_channels, out_channels, stride, down_sample)])
for _ in range(self.num_layers - 1):
layers_list.append(block(out_channels, out_channels))
return nn.Sequential(*layers_list)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.layers_2n(x)
x = self.layers_4n(x)
x = self.layers_6n(x)
x = self.avg_pool(x)
x = x.view(x.size(0), -1)
x = self.fc_out(x)
return x
def resnet():
block = ResidualBlock
# total number of layers is 6n + 2; if n is 5 the depth of the network is 32 (here n = 3, giving depth 20)
model = ResNet(3, block)
return model
###Output
_____no_output_____
###Markdown
SE Resnet from https://github.com/moskomule/senet.pytorch/blob/master/senet/se_resnet.py
###Code
from torch.hub import load_state_dict_from_url
from torchvision.models import ResNet
class SELayer(nn.Module):
def __init__(self, channel, reduction=16):
super(SELayer, self).__init__()
self.avg_pool = nn.AdaptiveAvgPool2d(1)
self.fc = nn.Sequential(
nn.Linear(channel, channel // reduction, bias=False),
nn.ReLU(inplace=True),
nn.Linear(channel // reduction, channel, bias=False),
nn.Sigmoid()
)
def forward(self, x):
b, c, _, _ = x.size()
y = self.avg_pool(x).view(b, c)
y = self.fc(y).view(b, c, 1, 1)
return x * y.expand_as(x)
def conv3x3(in_planes, out_planes, stride=1):
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False)
class SEBasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
base_width=64, dilation=1, norm_layer=None,
*, reduction=16):
super(SEBasicBlock, self).__init__()
self.conv1 = conv3x3(inplanes, planes, stride)
self.bn1 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes, 1)
self.bn2 = nn.BatchNorm2d(planes)
self.se = SELayer(planes, reduction)
self.downsample = downsample
self.stride = stride
self.dropout = nn.Dropout(p=0.5)
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.se(out)
out = self.dropout(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class SEBottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
base_width=64, dilation=1, norm_layer=None,
*, reduction=16):
super(SEBottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes * 4)
self.relu = nn.ReLU(inplace=True)
self.se = SELayer(planes * 4, reduction)
self.downsample = downsample
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
out = self.se(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
def se_resnet18(num_classes=7):
"""Constructs a ResNet-18 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBasicBlock, [2, 2, 2, 2], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
def se_resnet34(num_classes=7):
"""Constructs a ResNet-34 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBasicBlock, [3, 4, 6, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
def se_resnet50(num_classes=7, pretrained=False):
"""Constructs a ResNet-50 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBottleneck, [3, 4, 6, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
if pretrained:
model.load_state_dict(load_state_dict_from_url(
"https://github.com/moskomule/senet.pytorch/releases/download/archive/seresnet50-60a8950a85b2b.pkl"))
return model
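# Note: if the released checkpoint above holds ImageNet weights (1000 classes), a strict
# load_state_dict into a 7-class model will fail on the final fc layer; loading with strict=False
# and re-initialising model.fc is the usual workaround.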
def se_resnet101(num_classes=7):
"""Constructs a ResNet-101 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBottleneck, [3, 4, 23, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
def se_resnet152(num_classes=7):
"""Constructs a ResNet-152 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBottleneck, [3, 8, 36, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
###Output
_____no_output_____
###Markdown
**Import files**
###Code
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
import torch
import torch.optim as optim
from torch.optim import lr_scheduler
import torch.backends.cudnn as cudnn
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
import argparse
from tensorboardX import SummaryWriter
###Output
_____no_output_____
###Markdown
Import dataset
###Code
import pickle
import random
from PIL import Image
print('==> Preparing data..')
cropped_image_list = []
label_list = []
img_path = './cropped/'
with open('result.pickle', 'rb') as f:
data = pickle.load(f)
transforms_train = transforms.Compose([
transforms.RandomCrop(112, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
transforms_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
# number of training and test samples (roughly a 90/10 split)
num_data=len(data.index)
num_test_data = int(num_data/10)
num_train_data = num_data - num_test_data
classes = ('JM', 'JN', 'JH', 'JK', 'RM', 'VV', 'SG')
class_dict = {'JM': 0,'JN':1, 'JH':2, 'JK':3, 'RM':4, 'VV':5, 'SG':6 }
for i in data.index:
img = Image.open(img_path+data[0][i])
cropped_image_list.append(img)
label = class_dict[data[1][i]]
label_list.append(label)
# build the train/test datasets by random selection
dataset_train = []
dataset_test = []
# randomly select the test-set indices (without replacement)
rand = np.random.choice(np.arange(num_data), num_test_data, replace=False)
# a custom torch.utils.data.Dataset class (with __init__/__getitem__) would be the cleaner way
# to manage these variables, but here we keep it simple and build plain (tensor, label) lists
# without a class; see the sketch at the end of this cell
count_test=0
count_train=0
for i in np.arange(num_data):
if i in rand:
img = transforms_test(cropped_image_list[i])
dataset_test.append([img,label_list[i]])
else:
img = transforms_train(cropped_image_list[i])
dataset_train.append([img,label_list[i]])
#dataset_test = np.asarray(dataset_test)
#dataset_train = np.asarray(dataset_train)
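# Minimal sketch of the Dataset-class alternative mentioned above (illustrative names, not used
# below): it applies the transform lazily in __getitem__ instead of materialising every
# transformed tensor up front.
from torch.utils.data import Dataset
class LazyCroppedDataset(Dataset):
    def __init__(self, images, labels, transform):
        self.images, self.labels, self.transform = images, labels, transform
    def __len__(self):
        return len(self.images)
    def __getitem__(self, idx):
        return self.transform(self.images[idx]), self.labels[idx]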
###Output
_____no_output_____
###Markdown
Focal loss (https://github.com/foamliu/InsightFace-v2/blob/master/focal_loss.py)
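For reference, the implementation below rescales the cross-entropy term by a modulating factor, $FL = (1 - p)^{\gamma} \cdot CE$ with $p = e^{-CE}$, so confidently classified inputs contribute less to the loss; with $\gamma = 0$ it reduces to plain cross-entropy. Because CrossEntropyLoss defaults to reduction='mean', the factor here is computed from the batch-averaged loss rather than per example; reduction='none' would recover the canonical per-sample weighting.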
###Code
class FocalLoss(nn.Module):
def __init__(self, gamma=0):
super(FocalLoss, self).__init__()
self.gamma = gamma
self.ce = torch.nn.CrossEntropyLoss()
def forward(self, input, target):
logp = self.ce(input, target)
p = torch.exp(-logp)
loss = (1 - p) ** self.gamma * logp
return loss.mean()
exp_id=0
lr = 0.1  # note: unused; the Adam optimizer below hard-codes lr=0.001
batch_size = 16
batch_size_test=2
num_worker=1
resume = None
logdir = './output/'
device = 'cuda' if torch.cuda.is_available() else 'cpu'
train_loader = DataLoader(dataset_train, batch_size=batch_size,
shuffle=True, num_workers=num_worker)
test_loader = DataLoader(dataset_test, batch_size=batch_size_test,
shuffle=False, num_workers=num_worker)
# bts class
classes = ('JM', 'JN', 'JH', 'JK', 'RM', 'VV', 'SG')
print('==> Making model..')
#net = resnet()
net = se_resnet18()
net = net.to(device)
num_params = sum(p.numel() for p in net.parameters() if p.requires_grad)
print('The number of parameters of model is', num_params)
# print(net)
if resume is not None:
checkpoint = torch.load('./save_model/' + str(exp_id))
net.load_state_dict(checkpoint['net'])
#criterion = nn.CrossEntropyLoss()
criterion = FocalLoss(gamma=2.0).to(device)
#criterion = nn.TripletMarginLoss(margin=1.0, p=2.0,
# eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean')
optimizer = optim.Adam(net.parameters(), lr=0.001, weight_decay=1e-4)
decay_epoch = [2400, 3600]
step_lr_scheduler = lr_scheduler.MultiStepLR(optimizer,
milestones=decay_epoch, gamma=0.1)
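# Note: step_lr_scheduler.step() is called once per batch inside train(), so these milestones
# count optimizer steps (batches), not epochs; with the 4800-step budget in the main loop the
# learning rate is decayed at steps 2400 and 3600.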
writer = SummaryWriter(logdir)
def train(epoch, global_steps):
net.train()
train_loss = 0
correct = 0
total = 0
for batch_idx, (inputs, targets) in enumerate(train_loader):
global_steps += 1
step_lr_scheduler.step()
inputs = inputs.to(device)
targets = targets.to(device)
outputs = net(inputs)
loss = criterion(outputs, targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
train_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
acc = 100 * correct / total
print('train epoch : {} [{}/{}]| loss: {:.3f} | acc: {:.3f}'.format(
epoch, batch_idx, len(train_loader), train_loss/(batch_idx+1), acc))
writer.add_scalar('./log/train error', 100 - acc, global_steps)
return global_steps
def test(epoch, best_acc, global_steps):
net.eval()
test_loss = 0
correct = 0
total = 0
with torch.no_grad():
for batch_idx, (inputs, targets) in enumerate(test_loader):
inputs = inputs.to(device)
targets = targets.to(device)
outputs = net(inputs)
loss = criterion(outputs, targets)
test_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
acc = 100 * correct / total
print('test epoch : {} [{}/{}]| loss: {:.3f} | acc: {:.3f}'.format(
epoch, batch_idx, len(test_loader), test_loss/(batch_idx+1), acc))
writer.add_scalar('./log/test error', 100 - acc, global_steps)
if acc > best_acc:
print('==> Saving model..')
state = {
'net': net.state_dict(),
'acc': acc,
'epoch': epoch,
}
if not os.path.isdir('save_model'):
os.mkdir('save_model')
torch.save(state, './save_model/ckpt.pth')
best_acc = acc
return best_acc
if __name__=='__main__':
best_acc = 0
epoch = 0
global_steps = 0
if resume is not None:
        test(epoch=0, best_acc=0, global_steps=0)
else:
while True:
epoch += 1
global_steps = train(epoch, global_steps)
best_acc = test(epoch, best_acc, global_steps)
print('best test accuracy is ', best_acc)
if global_steps >= 4800:
break
# Any results you write to the current directory are saved as output.
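# Note: the train/test error curves above are written with tensorboardX to ./output/ and can be
# inspected with `tensorboard --logdir ./output/`.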
###Output
_____no_output_____
###Markdown
Set resnet models
###Code
import torch.nn as nn
import torch.nn.functional as F
# code from https://github.com/KellerJordan/ResNet-PyTorch-CIFAR10/blob/master/model.py
class IdentityPadding(nn.Module):
def __init__(self, in_channels, out_channels, stride):
super(IdentityPadding, self).__init__()
self.pooling = nn.MaxPool2d(1, stride=stride)
self.add_channels = out_channels - in_channels
def forward(self, x):
out = F.pad(x, (0, 0, 0, 0, 0, self.add_channels))
out = self.pooling(out)
return out
class ResidualBlock(nn.Module):
def __init__(self, in_channels, out_channels, stride=1, down_sample=False):
super(ResidualBlock, self).__init__()
self.down_sample = down_sample
self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(out_channels)
self.relu = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3,
stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(out_channels)
self.stride = stride
self.dropout = nn.Dropout(0.2)
if down_sample:
self.down_sample = IdentityPadding(in_channels, out_channels, stride)
else:
self.down_sample = None
def forward(self, x):
shortcut = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.dropout(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.dropout(out)
if self.down_sample is not None:
shortcut = self.down_sample(x)
out += shortcut
out = self.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, num_layers, block, num_classes=7):
super(ResNet, self).__init__()
self.num_layers = num_layers
self.conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3,
stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(16)
self.relu = nn.ReLU(inplace=True)
# feature map size = 112x112x16
self.layers_2n = self.get_layers(block, 16, 16, stride=1)
# feature map size = 56x56x32
self.layers_4n = self.get_layers(block, 16, 32, stride=2)
# feature map size = 28x28x64
self.layers_6n = self.get_layers(block, 32, 64, stride=2)
        # output layers (note: despite the avg_pool name, a 28x28 max-pool is applied here)
self.avg_pool = nn.MaxPool2d(28, stride=1)
self.fc_out = nn.Linear(64, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out',
nonlinearity='relu')
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
def get_layers(self, block, in_channels, out_channels, stride):
if stride == 2:
down_sample = True
else:
down_sample = False
layers_list = nn.ModuleList(
[block(in_channels, out_channels, stride, down_sample)])
for _ in range(self.num_layers - 1):
layers_list.append(block(out_channels, out_channels))
return nn.Sequential(*layers_list)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.layers_2n(x)
x = self.layers_4n(x)
x = self.layers_6n(x)
x = self.avg_pool(x)
x = x.view(x.size(0), -1)
x = self.fc_out(x)
return x
def resnet():
block = ResidualBlock
    # total number of layers is 6n + 2; n = 5 would give a 32-layer network (here n = 3, i.e. 20 layers).
model = ResNet(3, block)
return model
###Output
_____no_output_____
###Markdown
SE Resnet from https://github.com/moskomule/senet.pytorch/blob/master/senet/se_resnet.py
###Code
from torch.hub import load_state_dict_from_url
from torchvision.models import ResNet
class SELayer(nn.Module):
def __init__(self, channel, reduction=16):
super(SELayer, self).__init__()
self.avg_pool = nn.AdaptiveAvgPool2d(1)
self.fc = nn.Sequential(
nn.Linear(channel, channel // reduction, bias=False),
nn.ReLU(inplace=True),
nn.Linear(channel // reduction, channel, bias=False),
nn.Sigmoid()
)
def forward(self, x):
b, c, _, _ = x.size()
y = self.avg_pool(x).view(b, c)
y = self.fc(y).view(b, c, 1, 1)
return x * y.expand_as(x)
def conv3x3(in_planes, out_planes, stride=1):
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False)
class SEBasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
base_width=64, dilation=1, norm_layer=None,
*, reduction=16):
super(SEBasicBlock, self).__init__()
self.conv1 = conv3x3(inplanes, planes, stride)
self.bn1 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes, 1)
self.bn2 = nn.BatchNorm2d(planes)
self.se = SELayer(planes, reduction)
self.downsample = downsample
self.stride = stride
self.dropout = nn.Dropout(p=0.5)
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.se(out)
out = self.dropout(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class SEBottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
base_width=64, dilation=1, norm_layer=None,
*, reduction=16):
super(SEBottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes * 4)
self.relu = nn.ReLU(inplace=True)
self.se = SELayer(planes * 4, reduction)
self.downsample = downsample
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
out = self.se(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
def se_resnet18(num_classes=7):
"""Constructs a ResNet-18 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBasicBlock, [2, 2, 2, 2], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
def se_resnet34(num_classes=7):
"""Constructs a ResNet-34 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBasicBlock, [3, 4, 6, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
def se_resnet50(num_classes=7, pretrained=False):
"""Constructs a ResNet-50 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBottleneck, [3, 4, 6, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
if pretrained:
model.load_state_dict(load_state_dict_from_url(
"https://github.com/moskomule/senet.pytorch/releases/download/archive/seresnet50-60a8950a85b2b.pkl"))
return model
def se_resnet101(num_classes=7):
"""Constructs a ResNet-101 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBottleneck, [3, 4, 23, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
def se_resnet152(num_classes=7):
"""Constructs a ResNet-152 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBottleneck, [3, 8, 36, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
###Output
_____no_output_____
###Markdown
**Import files**
###Code
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
import torch
import torch.optim as optim
from torch.optim import lr_scheduler
import torch.backends.cudnn as cudnn
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
import argparse
from tensorboardX import SummaryWriter
###Output
_____no_output_____
###Markdown
Import dataset
###Code
import pickle
import random
from PIL import Image
print('==> Preparing data..')
cropped_image_list = []
label_list = []
img_path = './cropped/'
with open('result.pickle', 'rb') as f:
data = pickle.load(f)
transforms_train = transforms.Compose([
transforms.RandomCrop(112, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
transforms_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
#number of train set and test set
num_data=len(data.index)
num_test_data = int(num_data/10)
num_train_data = num_data - num_test_data
classes = ('JM', 'JN', 'JH', 'JK', 'RM', 'VV', 'SG')
class_dict = {'JM': 0,'JN':1, 'JH':2, 'JK':3, 'RM':4, 'VV':5, 'SG':6 }
for i in data.index:
img = Image.open(img_path+data[0][i])
cropped_image_list.append(img)
label = class_dict[data[1][i]]
label_list.append(label)
#dataset making by random select
dataset_train = []
dataset_test = []
#random select test data index
rand = np.random.choice(np.arange(num_data), num_test_data, replace=False)
# A Dataset class with __init__() would be cleaner for managing these variables,
# but here we keep things simple and build plain lists instead of a class.
count_test=0
count_train=0
for i in np.arange(num_data):
if i in rand:
img = transforms_test(cropped_image_list[i])
dataset_test.append([img,label_list[i]])
else:
img = transforms_train(cropped_image_list[i])
dataset_train.append([img,label_list[i]])
#dataset_test = np.asarray(dataset_test)
#dataset_train = np.asarray(dataset_train)
###Output
_____no_output_____
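###Markdown
A quick sanity check on the random split above (a small sketch; it only re-uses the lists already built in the previous cell).
###Code
from collections import Counter
# confirm the 90/10 split sizes and the per-class distribution of the labels
print('train size:', len(dataset_train))
print('test size :', len(dataset_test))
print('class counts over all data:', Counter(label_list))
###Output
_____no_output_____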
###Markdown
Focal losshttps://github.com/foamliu/InsightFace-v2/blob/master/focal_loss.py
###Code
class FocalLoss(nn.Module):
def __init__(self, gamma=0):
super(FocalLoss, self).__init__()
self.gamma = gamma
self.ce = torch.nn.CrossEntropyLoss()
def forward(self, input, target):
logp = self.ce(input, target)
p = torch.exp(-logp)
loss = (1 - p) ** self.gamma * logp
return loss.mean()
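# Focal loss (Lin et al.): FL = (1 - p)**gamma * CE, where p = exp(-CE) is the
# probability assigned to the true class; gamma = 0 recovers plain cross-entropy.
# Note: CrossEntropyLoss above uses its default mean reduction, so the focal
# weight is applied to the batch-averaged loss rather than per sample.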
exp_id = 0
lr = 0.1  # unused below; the Adam optimizer is created with lr=0.001
batch_size = 16
batch_size_test = 2
num_worker = 1
resume = None
logdir = './output/'
device = 'cuda' if torch.cuda.is_available() else 'cpu'
train_loader = DataLoader(dataset_train, batch_size=batch_size,
shuffle=True, num_workers=num_worker)
test_loader = DataLoader(dataset_test, batch_size=batch_size_test,
shuffle=False, num_workers=num_worker)
# bts class
classes = ('JM', 'JN', 'JH', 'JK', 'RM', 'VV', 'SG')
print('==> Making model..')
#net = resnet()
net = se_resnet18()
net = net.to(device)
num_params = sum(p.numel() for p in net.parameters() if p.requires_grad)
print('The number of parameters of model is', num_params)
# print(net)
if resume is not None:
    # checkpoints are saved below as './save_model/ckpt.pth'
    checkpoint = torch.load('./save_model/ckpt.pth', map_location=device)
    net.load_state_dict(checkpoint['net'])
#criterion = nn.CrossEntropyLoss()
criterion = FocalLoss(gamma=2.0).to(device)
#criterion = nn.TripletMarginLoss(margin=1.0, p=2.0,
# eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean')
optimizer = optim.Adam(net.parameters(), lr=0.001, weight_decay=1e-4)
decay_epoch = [2400, 3600]
step_lr_scheduler = lr_scheduler.MultiStepLR(optimizer,
milestones=decay_epoch, gamma=0.1)
writer = SummaryWriter(logdir)
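# TensorBoard logs are written to ./output/; view them with: tensorboard --logdir ./output/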
def train(epoch, global_steps):
net.train()
train_loss = 0
correct = 0
total = 0
for batch_idx, (inputs, targets) in enumerate(train_loader):
        global_steps += 1
        inputs = inputs.to(device)
        targets = targets.to(device)
        outputs = net(inputs)
        loss = criterion(outputs, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # step the scheduler once per batch, after optimizer.step();
        # the milestones above (2400, 3600) are counted in global steps, not epochs
        step_lr_scheduler.step()
train_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
acc = 100 * correct / total
print('train epoch : {} [{}/{}]| loss: {:.3f} | acc: {:.3f}'.format(
epoch, batch_idx, len(train_loader), train_loss/(batch_idx+1), acc))
writer.add_scalar('./log/train error', 100 - acc, global_steps)
return global_steps
def test(epoch, best_acc, global_steps):
net.eval()
test_loss = 0
correct = 0
total = 0
with torch.no_grad():
for batch_idx, (inputs, targets) in enumerate(test_loader):
inputs = inputs.to(device)
targets = targets.to(device)
outputs = net(inputs)
loss = criterion(outputs, targets)
test_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
acc = 100 * correct / total
print('test epoch : {} [{}/{}]| loss: {:.3f} | acc: {:.3f}'.format(
epoch, batch_idx, len(test_loader), test_loss/(batch_idx+1), acc))
writer.add_scalar('./log/test error', 100 - acc, global_steps)
if acc > best_acc:
print('==> Saving model..')
state = {
'net': net.state_dict(),
'acc': acc,
'epoch': epoch,
}
if not os.path.isdir('save_model'):
os.mkdir('save_model')
torch.save(state, './save_model/ckpt.pth')
best_acc = acc
return best_acc
if __name__=='__main__':
best_acc = 0
epoch = 0
global_steps = 0
if resume is not None:
        test(epoch=0, best_acc=0, global_steps=0)
else:
while True:
epoch += 1
global_steps = train(epoch, global_steps)
best_acc = test(epoch, best_acc, global_steps)
print('best test accuracy is ', best_acc)
if global_steps >= 4800:
break
###Output
_____no_output_____
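###Markdown
After training, the best checkpoint can be reloaded for a quick qualitative check (a minimal sketch; './save_model/ckpt.pth' is the path written by `test()` above and only exists once at least one epoch improved the accuracy).
###Code
# reload the best checkpoint and predict a single test batch
ckpt = torch.load('./save_model/ckpt.pth', map_location=device)
net.load_state_dict(ckpt['net'])
net.eval()
with torch.no_grad():
    xb, yb = next(iter(test_loader))
    preds = net(xb.to(device)).argmax(dim=1)
print('predicted:', [classes[p] for p in preds.cpu().tolist()])
print('actual   :', [classes[y] for y in yb.tolist()])
###Output
_____no_output_____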
###Markdown
Set resnet models
###Code
import torch.nn as nn
import torch.nn.functional as F
# code from https://github.com/KellerJordan/ResNet-PyTorch-CIFAR10/blob/master/model.py
class IdentityPadding(nn.Module):
def __init__(self, in_channels, out_channels, stride):
super(IdentityPadding, self).__init__()
self.pooling = nn.MaxPool2d(1, stride=stride)
self.add_channels = out_channels - in_channels
def forward(self, x):
out = F.pad(x, (0, 0, 0, 0, 0, self.add_channels))
out = self.pooling(out)
return out
class ResidualBlock(nn.Module):
def __init__(self, in_channels, out_channels, stride=1, down_sample=False):
super(ResidualBlock, self).__init__()
self.down_sample = down_sample
self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(out_channels)
self.relu = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3,
stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(out_channels)
self.stride = stride
self.dropout = nn.Dropout(0.2)
if down_sample:
self.down_sample = IdentityPadding(in_channels, out_channels, stride)
else:
self.down_sample = None
def forward(self, x):
shortcut = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.dropout(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.dropout(out)
if self.down_sample is not None:
shortcut = self.down_sample(x)
out += shortcut
out = self.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, num_layers, block, num_classes=7):
super(ResNet, self).__init__()
self.num_layers = num_layers
self.conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3,
stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(16)
self.relu = nn.ReLU(inplace=True)
# feature map size = 112x112x16
self.layers_2n = self.get_layers(block, 16, 16, stride=1)
# feature map size = 56x56x32
self.layers_4n = self.get_layers(block, 16, 32, stride=2)
# feature map size = 28x28x64
self.layers_6n = self.get_layers(block, 32, 64, stride=2)
        # output layers (note: despite its name, this "avg_pool" is a 28x28 max-pool)
        self.avg_pool = nn.MaxPool2d(28, stride=1)
self.fc_out = nn.Linear(64, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out',
nonlinearity='relu')
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
def get_layers(self, block, in_channels, out_channels, stride):
if stride == 2:
down_sample = True
else:
down_sample = False
layers_list = nn.ModuleList(
[block(in_channels, out_channels, stride, down_sample)])
for _ in range(self.num_layers - 1):
layers_list.append(block(out_channels, out_channels))
return nn.Sequential(*layers_list)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.layers_2n(x)
x = self.layers_4n(x)
x = self.layers_6n(x)
x = self.avg_pool(x)
x = x.view(x.size(0), -1)
x = self.fc_out(x)
return x
def resnet():
block = ResidualBlock
    # the total number of layers is 6n + 2; with n = 5 the depth of the network is 32
model = ResNet(3, block)
return model
###Output
_____no_output_____
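###Markdown
For comparison with the SE variants used above, the plain CIFAR-style ResNet defined here can be instantiated and sized as follows (a small sketch).
###Code
_plain = resnet()
print('plain resnet trainable parameters:', sum(p.numel() for p in _plain.parameters() if p.requires_grad))
###Output
_____no_output_____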
###Markdown
SE Resnet from https://github.com/moskomule/senet.pytorch/blob/master/senet/se_resnet.py
###Code
from torch.hub import load_state_dict_from_url
from torchvision.models import ResNet
class SELayer(nn.Module):
def __init__(self, channel, reduction=16):
super(SELayer, self).__init__()
self.avg_pool = nn.AdaptiveAvgPool2d(1)
self.fc = nn.Sequential(
nn.Linear(channel, channel // reduction, bias=False),
nn.ReLU(inplace=True),
nn.Linear(channel // reduction, channel, bias=False),
nn.Sigmoid()
)
def forward(self, x):
b, c, _, _ = x.size()
y = self.avg_pool(x).view(b, c)
y = self.fc(y).view(b, c, 1, 1)
return x * y.expand_as(x)
def conv3x3(in_planes, out_planes, stride=1):
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False)
class SEBasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
base_width=64, dilation=1, norm_layer=None,
*, reduction=16):
super(SEBasicBlock, self).__init__()
self.conv1 = conv3x3(inplanes, planes, stride)
self.bn1 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes, 1)
self.bn2 = nn.BatchNorm2d(planes)
self.se = SELayer(planes, reduction)
self.downsample = downsample
self.stride = stride
self.dropout = nn.Dropout(p=0.5)
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.se(out)
out = self.dropout(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class SEBottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
base_width=64, dilation=1, norm_layer=None,
*, reduction=16):
super(SEBottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes * 4)
self.relu = nn.ReLU(inplace=True)
self.se = SELayer(planes * 4, reduction)
self.downsample = downsample
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
out = self.se(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
def se_resnet18(num_classes=7):
"""Constructs a ResNet-18 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBasicBlock, [2, 2, 2, 2], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
def se_resnet34(num_classes=7):
"""Constructs a ResNet-34 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBasicBlock, [3, 4, 6, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
def se_resnet50(num_classes=7, pretrained=False):
"""Constructs a ResNet-50 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBottleneck, [3, 4, 6, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
if pretrained:
model.load_state_dict(load_state_dict_from_url(
"https://github.com/moskomule/senet.pytorch/releases/download/archive/seresnet50-60a8950a85b2b.pkl"))
return model
def se_resnet101(num_classes=7):
"""Constructs a ResNet-101 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBottleneck, [3, 4, 23, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
def se_resnet152(num_classes=7):
"""Constructs a ResNet-152 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(SEBottleneck, [3, 8, 36, 3], num_classes=num_classes)
model.avgpool = nn.AdaptiveAvgPool2d(1)
return model
###Output
_____no_output_____ |
analysis/wsc-spacy-filter-data.ipynb | ###Markdown
Filter Original Data
###Code
import os
import json
import pandas as pd

DATASET = 'wsc-spacy'
wd = os.path.dirname('__file__')  # note: '__file__' is a literal string here, so wd is '' (the current directory)
#f_name = f'{DATASET}_RESULTS.csv'
#results = pd.read_csv(os.path.join(f_name))
f_name_MCMLM_PSPAN = 'raw_results_mcmlm_pspan_wsccross_wscspacy.csv'
f_name_OTHERS = 'wsc-spacy.csv'
f_name_remaining_OTHERS = 'wsc_spacy_mcsent_psent_mcploss_mcpair_mcscale - Sheet1.csv'
results1 = pd.read_csv(os.path.join(wd, f_name_MCMLM_PSPAN))
#results2 = pd.read_csv(os.path.join(wd, f_name_OTHERS))
results3 = pd.read_csv(os.path.join(wd, f_name_remaining_OTHERS))
results = pd.concat([results1, results3]).reset_index()
print(list(results.columns.values))
print(f"\n {results.shape}")
framings = results['framing'].unique()
print(framings)
print(results.shape[0])
print(len(results['exp_name'].unique()))
print(list(results['learning_rate'].unique()))
print(list(results['max_epochs'].unique()))
print(list(results['batch_size'].unique()))
lr_c = list(results['learning_rate'].unique())[:3]
bs_c = list(results['batch_size'].unique())
ep_c = list(results['max_epochs'].unique())[:3]
data = 'wsc-spacy'
# pad lr_c and ep_c with a placeholder 0 so every column passed to the DataFrame has the same length
lr_c.append(0)
ep_c.append(0)
hp_space = pd.DataFrame({'learning_rate': lr_c, 'batch_size': bs_c, 'max_epochs': ep_c, 'dataset': data})
print(lr_c)
print(bs_c)
print(ep_c)
lr = 'learning_rate'
bs = 'batch_size'
ep = 'max_epochs'
d = 'dataset'
keep = []
frames = {key:0 for key in framings}
for label, row in results.iterrows():
if (row['dataset'] == data and
row['learning_rate'] in lr_c and
row['batch_size'] in bs_c and
row['max_epochs'] in ep_c):
keep.append(label)
frames[row['framing']] += 1
print(f"Kept {len(keep)}")
print(frames)
filtered_results = results.loc[keep,:].reset_index()
print(filtered_results.shape[0])
print(len(filtered_results['exp_name'].unique()))
###Output
420
420
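###Markdown
The same filter can be expressed as a vectorized pandas mask (a sketch for comparison; `mask_filtered` is an illustrative name and is not used below).
###Code
mask = (
    (results['dataset'] == data)
    & results['learning_rate'].isin(lr_c)
    & results['batch_size'].isin(bs_c)
    & results['max_epochs'].isin(ep_c)
)
mask_filtered = results[mask].reset_index()
print(mask_filtered.shape[0])  # should agree with the loop-based count above
###Output
_____no_output_____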
###Markdown
Save
###Code
wd = os.path.dirname('__file__')
out_results = os.path.join(wd, f"{DATASET}_RESULTS.csv")
framing_counts = os.path.join(wd,f"{DATASET}_FRAMING_COUNTS.json")
hp_name = os.path.join(wd,f"{DATASET}_HP_SPACE.json")
filtered_results.to_csv(out_results)
hp_space.to_json(hp_name)
with open(framing_counts, 'w') as f:
f.write(json.dumps(frames))
###Output
_____no_output_____ |
docs/tutorials/03_minimum_eigen_optimizer.ipynb | ###Markdown
Minimum Eigen Optimizer Introduction An interesting class of optimization problems to be addressed by quantum computing are Quadratic Unconstrained Binary Optimization (QUBO) problems. Finding the solution to a QUBO is equivalent to finding the ground state of a corresponding Ising Hamiltonian, which is an important problem not only in optimization, but also in quantum chemistry and physics. For this translation, the binary variables taking values in $\{0, 1\}$ are replaced by spin variables taking values in $\{-1, +1\}$, which allows one to replace the resulting spin variables by Pauli Z matrices, and thus, an Ising Hamiltonian. For more details on this mapping we refer to [1]. Qiskit provides automatic conversion from a suitable `QuadraticProgram` to an Ising Hamiltonian, which then allows leveraging all the `MinimumEigenSolver` implementations such as- `VQE`,- `QAOA`, or- `NumpyMinimumEigensolver` (classical exact method). Qiskit wraps the translation to an Ising Hamiltonian (in Qiskit Aqua also called `Operator`), the call to a `MinimumEigensolver`, as well as the translation of the results back to `OptimizationResult` in the `MinimumEigenOptimizer`. In the following we first illustrate the conversion from a `QuadraticProgram` to an `Operator` and then show how to use the `MinimumEigenOptimizer` with different `MinimumEigensolver`s to solve a given `QuadraticProgram`. The algorithms in Qiskit automatically try to convert a given problem to the supported problem class if possible; for instance, the `MinimumEigenOptimizer` will automatically translate integer variables to binary variables or add linear equality constraints as a quadratic penalty term to the objective. It should be mentioned that Aqua will throw a `QiskitOptimizationError` if conversion of a quadratic program with integer variables is attempted. The circuit depth of `QAOA` potentially has to be increased with the problem size, which might be prohibitive for near-term quantum devices. A possible workaround is Recursive QAOA, as introduced in [2]. Qiskit generalizes this concept to the `RecursiveMinimumEigenOptimizer`, which is introduced at the end of this tutorial. References[1] [A. Lucas, *Ising formulations of many NP problems,* Front. Phys., 12 (2014).](https://arxiv.org/abs/1302.5843)[2] [S. Bravyi, A. Kliesch, R. Koenig, E. Tang, *Obstacles to State Preparation and Variational Optimization from Symmetry Protection,* arXiv preprint arXiv:1910.08980 (2019).](https://arxiv.org/abs/1910.08980) Converting a QUBO to an Operator
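For example, substituting $x_i = (1 - z_i)/2$ with spin variables $z_i \in \{-1, +1\}$ turns a quadratic binary term $c\,x_i x_j$ into $\frac{c}{4}(1 - z_i)(1 - z_j)$; expanding all such terms and replacing each $z_i$ by the Pauli operator $Z_i$ yields the Ising Hamiltonian plus a constant offset.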
###Code
from qiskit import BasicAer
from qiskit.utils import algorithm_globals, QuantumInstance
from qiskit.algorithms import QAOA, NumPyMinimumEigensolver
from qiskit_optimization.algorithms import MinimumEigenOptimizer, RecursiveMinimumEigenOptimizer, SolutionSample, OptimizationResultStatus
from qiskit_optimization import QuadraticProgram
from qiskit.visualization import plot_histogram
from typing import List, Tuple
import numpy as np
# create a QUBO
qubo = QuadraticProgram()
qubo.binary_var('x')
qubo.binary_var('y')
qubo.binary_var('z')
qubo.minimize(linear=[1,-2,3], quadratic={('x', 'y'): 1, ('x', 'z'): -1, ('y', 'z'): 2})
print(qubo.export_as_lp_string())
###Output
\ This file has been generated by DOcplex
\ ENCODING=ISO-8859-1
\Problem name: CPLEX
Minimize
obj: x - 2 y + 3 z + [ 2 x*y - 2 x*z + 4 y*z ]/2
Subject To
Bounds
0 <= x <= 1
0 <= y <= 1
0 <= z <= 1
Binaries
x y z
End
###Markdown
Next we translate this QUBO into an Ising operator. This results not only in an `Operator` but also in a constant offset that has to be taken into account to shift the resulting value.
###Code
op, offset = qubo.to_ising()
print('offset: {}'.format(offset))
print('operator:')
print(op)
###Output
offset: 1.5
operator:
-1.75 * ZII
+ 0.25 * IZI
+ 0.5 * ZZI
- 0.5 * IIZ
- 0.25 * ZIZ
+ 0.25 * IZZ
###Markdown
Sometimes a `QuadraticProgram` might also be given directly in the form of an `Operator`. For such cases, Qiskit also provides a converter from an `Operator` back to a `QuadraticProgram`, which we illustrate in the following.
###Code
qp=QuadraticProgram()
qp.from_ising(op, offset, linear=True)
print(qp.export_as_lp_string())
###Output
\ This file has been generated by DOcplex
\ ENCODING=ISO-8859-1
\Problem name: CPLEX
Minimize
obj: x_0 - 2 x_1 + 3 x_2 + [ 2 x_0*x_1 - 2 x_0*x_2 + 4 x_1*x_2 ]/2
Subject To
Bounds
0 <= x_0 <= 1
0 <= x_1 <= 1
0 <= x_2 <= 1
Binaries
x_0 x_1 x_2
End
###Markdown
This converter allows one, for instance, to translate an `Operator` to a `QuadraticProgram` and then solve the problem with other algorithms that are not based on the Ising Hamiltonian representation, such as the `GroverOptimizer`. Solving a QUBO with the MinimumEigenOptimizer We start by initializing the `MinimumEigensolver` we want to use.
###Code
algorithm_globals.random_seed = 10598
quantum_instance = QuantumInstance(BasicAer.get_backend('statevector_simulator'),
seed_simulator=algorithm_globals.random_seed,
seed_transpiler=algorithm_globals.random_seed)
qaoa_mes = QAOA(quantum_instance=quantum_instance, initial_point=[0., 0.])
exact_mes = NumPyMinimumEigensolver()
###Output
_____no_output_____
###Markdown
Then, we use the `MinimumEigensolver` to create a `MinimumEigenOptimizer`.
###Code
qaoa = MinimumEigenOptimizer(qaoa_mes) # using QAOA
exact = MinimumEigenOptimizer(exact_mes) # using the exact classical numpy minimum eigen solver
###Output
_____no_output_____
###Markdown
We first use the `MinimumEigenOptimizer` based on the classical exact `NumPyMinimumEigensolver` to get the optimal benchmark solution for this small example.
###Code
exact_result = exact.solve(qubo)
print(exact_result)
###Output
optimal function value: -2.0
optimal value: [0. 1. 0.]
status: SUCCESS
###Markdown
Next we apply the `MinimumEigenOptimizer` based on `QAOA` to the same problem.
###Code
qaoa_result = qaoa.solve(qubo)
print(qaoa_result)
###Output
optimal function value: -2.0
optimal value: [0. 1. 0.]
status: SUCCESS
###Markdown
Analysis of Samples `OptimizationResult` provides a useful information source in the form of `SolutionSample` objects (here denoted as *samples*). They contain information about the input values `x`, the objective function values `fval`, the probability of obtaining that result `probability`, and the solution status `status` (`SUCCESS`, `FAILURE`, `INFEASIBLE`).
###Code
print('variable order:', [var.name for var in qaoa_result.variables])
for s in qaoa_result.samples:
print(s)
###Output
variable order: ['x', 'y', 'z']
SolutionSample(x=array([0., 1., 0.]), fval=-2.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 0., 0.]), fval=0.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 1., 0.]), fval=0.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 0., 0.]), fval=1.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 0., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 0., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 1., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 1., 1.]), fval=4.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
###Markdown
We may also want to filter samples according to their status or probabilities.
###Code
def get_filtered_samples(samples: List[SolutionSample],
threshold: float = 0,
allowed_status: Tuple[OptimizationResultStatus] = (OptimizationResultStatus.SUCCESS,)):
res = []
for s in samples:
if s.status in allowed_status and s.probability > threshold:
res.append(s)
return res
filtered_samples = get_filtered_samples(qaoa_result.samples,
threshold=0.005,
allowed_status=(OptimizationResultStatus.SUCCESS,))
for s in filtered_samples:
print(s)
###Output
SolutionSample(x=array([0., 1., 0.]), fval=-2.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 0., 0.]), fval=0.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 1., 0.]), fval=0.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 0., 0.]), fval=1.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 0., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 0., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 1., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 1., 1.]), fval=4.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
###Markdown
If we want to obtain a better perspective of the results, statistics are very helpful, both with respect to the objective function values and their respective probabilities. Thus, the mean and standard deviation are the very basics for understanding the results.
###Code
fvals = [s.fval for s in qaoa_result.samples]
probabilities = [s.probability for s in qaoa_result.samples]
np.mean(fvals)
np.std(fvals)
###Output
_____no_output_____
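###Markdown
Since every sample also carries a probability, a probability-weighted mean of the objective values is often more informative than the plain mean above (a small sketch using the two lists just computed).
###Code
# probability-weighted mean objective value over the sampled bit strings
np.average(fvals, weights=probabilities)
###Output
_____no_output_____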
###Markdown
Finally, despite all number-crunching, visualization is usually the best early-analysis approach.
###Code
samples_for_plot = {' '.join(f'{qaoa_result.variables[i].name}={int(v)}'
for i, v in enumerate(s.x)): s.probability
for s in filtered_samples}
samples_for_plot
plot_histogram(samples_for_plot)
###Output
_____no_output_____
###Markdown
RecursiveMinimumEigenOptimizer The `RecursiveMinimumEigenOptimizer` takes a `MinimumEigenOptimizer` as input and applies the recursive optimization scheme to reduce the size of the problem one variable at a time.Once the size of the generated intermediate problem is below a given threshold (`min_num_vars`), the `RecursiveMinimumEigenOptimizer` uses another solver (`min_num_vars_optimizer`), e.g., an exact classical solver such as CPLEX or the `MinimumEigenOptimizer` based on the `NumPyMinimumEigensolver`.In the following, we show how to use the `RecursiveMinimumEigenOptimizer` using the two `MinimumEigenOptimizer` introduced before. First, we construct the `RecursiveMinimumEigenOptimizer` such that it reduces the problem size from 3 variables to 1 variable and then uses the exact solver for the last variable. Then we call `solve` to optimize the considered problem.
###Code
rqaoa = RecursiveMinimumEigenOptimizer(qaoa, min_num_vars=1, min_num_vars_optimizer=exact)
rqaoa_result = rqaoa.solve(qubo)
print(rqaoa_result)
filtered_samples = get_filtered_samples(rqaoa_result.samples,
threshold=0.005,
allowed_status=(OptimizationResultStatus.SUCCESS,))
samples_for_plot = {' '.join(f'{rqaoa_result.variables[i].name}={int(v)}'
for i, v in enumerate(s.x)): s.probability
for s in filtered_samples}
samples_for_plot
plot_histogram(samples_for_plot)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____
###Markdown
Minimum Eigen Optimizer Introduction An interesting class of optimization problems to be addressed by quantum computing are Quadratic Unconstrained Binary Optimization (QUBO) problems. Finding the solution to a QUBO is equivalent to finding the ground state of a corresponding Ising Hamiltonian, which is an important problem not only in optimization, but also in quantum chemistry and physics. For this translation, the binary variables taking values in $\{0, 1\}$ are replaced by spin variables taking values in $\{-1, +1\}$, which allows one to replace the resulting spin variables by Pauli Z matrices, and thus, an Ising Hamiltonian. For more details on this mapping we refer to [1]. Qiskit provides automatic conversion from a suitable `QuadraticProgram` to an Ising Hamiltonian, which then allows leveraging all the `MinimumEigenSolver` implementations such as- `VQE`,- `QAOA`, or- `NumpyMinimumEigensolver` (classical exact method). Qiskit wraps the translation to an Ising Hamiltonian (in Qiskit Aqua also called `Operator`), the call to a `MinimumEigensolver`, as well as the translation of the results back to `OptimizationResult` in the `MinimumEigenOptimizer`. In the following we first illustrate the conversion from a `QuadraticProgram` to an `Operator` and then show how to use the `MinimumEigenOptimizer` with different `MinimumEigensolver`s to solve a given `QuadraticProgram`. The algorithms in Qiskit automatically try to convert a given problem to the supported problem class if possible; for instance, the `MinimumEigenOptimizer` will automatically translate integer variables to binary variables or add linear equality constraints as a quadratic penalty term to the objective. It should be mentioned that Aqua will throw a `QiskitOptimizationError` if conversion of a quadratic program with integer variables is attempted. The circuit depth of `QAOA` potentially has to be increased with the problem size, which might be prohibitive for near-term quantum devices. A possible workaround is Recursive QAOA, as introduced in [2]. Qiskit generalizes this concept to the `RecursiveMinimumEigenOptimizer`, which is introduced at the end of this tutorial. References[1] [A. Lucas, *Ising formulations of many NP problems,* Front. Phys., 12 (2014).](https://arxiv.org/abs/1302.5843)[2] [S. Bravyi, A. Kliesch, R. Koenig, E. Tang, *Obstacles to State Preparation and Variational Optimization from Symmetry Protection,* arXiv preprint arXiv:1910.08980 (2019).](https://arxiv.org/abs/1910.08980) Converting a QUBO to an Operator
###Code
from qiskit import BasicAer
from qiskit.utils import algorithm_globals, QuantumInstance
from qiskit.algorithms import QAOA, NumPyMinimumEigensolver
from qiskit_optimization.algorithms import (
MinimumEigenOptimizer,
RecursiveMinimumEigenOptimizer,
SolutionSample,
OptimizationResultStatus,
)
from qiskit_optimization import QuadraticProgram
from qiskit.visualization import plot_histogram
from typing import List, Tuple
import numpy as np
# create a QUBO
qubo = QuadraticProgram()
qubo.binary_var("x")
qubo.binary_var("y")
qubo.binary_var("z")
qubo.minimize(linear=[1, -2, 3], quadratic={("x", "y"): 1, ("x", "z"): -1, ("y", "z"): 2})
print(qubo.export_as_lp_string())
###Output
\ This file has been generated by DOcplex
\ ENCODING=ISO-8859-1
\Problem name: CPLEX
Minimize
obj: x - 2 y + 3 z + [ 2 x*y - 2 x*z + 4 y*z ]/2
Subject To
Bounds
0 <= x <= 1
0 <= y <= 1
0 <= z <= 1
Binaries
x y z
End
###Markdown
Next we translate this QUBO into an Ising operator. This results not only in an `Operator` but also in a constant offset that has to be taken into account to shift the resulting value.
###Code
op, offset = qubo.to_ising()
print("offset: {}".format(offset))
print("operator:")
print(op)
###Output
offset: 1.5
operator:
-1.75 * ZII
+ 0.25 * IZI
+ 0.5 * ZZI
- 0.5 * IIZ
- 0.25 * ZIZ
+ 0.25 * IZZ
###Markdown
Sometimes a `QuadraticProgram` might also be given directly in the form of an `Operator`. For such cases, Qiskit also provides a converter from an `Operator` back to a `QuadraticProgram`, which we illustrate in the following.
###Code
qp = QuadraticProgram()
qp.from_ising(op, offset, linear=True)
print(qp.export_as_lp_string())
###Output
\ This file has been generated by DOcplex
\ ENCODING=ISO-8859-1
\Problem name: CPLEX
Minimize
obj: x_0 - 2 x_1 + 3 x_2 + [ 2 x_0*x_1 - 2 x_0*x_2 + 4 x_1*x_2 ]/2
Subject To
Bounds
0 <= x_0 <= 1
0 <= x_1 <= 1
0 <= x_2 <= 1
Binaries
x_0 x_1 x_2
End
###Markdown
This converter allows one, for instance, to translate an `Operator` to a `QuadraticProgram` and then solve the problem with other algorithms that are not based on the Ising Hamiltonian representation, such as the `GroverOptimizer`. Solving a QUBO with the MinimumEigenOptimizer We start by initializing the `MinimumEigensolver` we want to use.
###Code
algorithm_globals.random_seed = 10598
quantum_instance = QuantumInstance(
BasicAer.get_backend("statevector_simulator"),
seed_simulator=algorithm_globals.random_seed,
seed_transpiler=algorithm_globals.random_seed,
)
qaoa_mes = QAOA(quantum_instance=quantum_instance, initial_point=[0.0, 0.0])
exact_mes = NumPyMinimumEigensolver()
###Output
_____no_output_____
###Markdown
Then, we use the `MinimumEigensolver` to create a `MinimumEigenOptimizer`.
###Code
qaoa = MinimumEigenOptimizer(qaoa_mes) # using QAOA
exact = MinimumEigenOptimizer(exact_mes) # using the exact classical numpy minimum eigen solver
###Output
_____no_output_____
###Markdown
We first use the `MinimumEigenOptimizer` based on the classical exact `NumPyMinimumEigensolver` to get the optimal benchmark solution for this small example.
###Code
exact_result = exact.solve(qubo)
print(exact_result)
###Output
optimal function value: -2.0
optimal value: [0. 1. 0.]
status: SUCCESS
###Markdown
Next we apply the `MinimumEigenOptimizer` based on `QAOA` to the same problem.
###Code
qaoa_result = qaoa.solve(qubo)
print(qaoa_result)
###Output
optimal function value: -2.0
optimal value: [0. 1. 0.]
status: SUCCESS
###Markdown
Analysis of Samples `OptimizationResult` provides a useful information source in the form of `SolutionSample` objects (here denoted as *samples*). They contain information about the input values `x`, the objective function values `fval`, the probability of obtaining that result `probability`, and the solution status `status` (`SUCCESS`, `FAILURE`, `INFEASIBLE`).
###Code
print("variable order:", [var.name for var in qaoa_result.variables])
for s in qaoa_result.samples:
print(s)
###Output
variable order: ['x', 'y', 'z']
SolutionSample(x=array([0., 1., 0.]), fval=-2.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 0., 0.]), fval=0.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 1., 0.]), fval=0.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 0., 0.]), fval=1.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 0., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 0., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 1., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 1., 1.]), fval=4.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
###Markdown
We may also want to filter samples according to their status or probabilities.
###Code
def get_filtered_samples(
samples: List[SolutionSample],
threshold: float = 0,
allowed_status: Tuple[OptimizationResultStatus] = (OptimizationResultStatus.SUCCESS,),
):
res = []
for s in samples:
if s.status in allowed_status and s.probability > threshold:
res.append(s)
return res
filtered_samples = get_filtered_samples(
qaoa_result.samples, threshold=0.005, allowed_status=(OptimizationResultStatus.SUCCESS,)
)
for s in filtered_samples:
print(s)
###Output
SolutionSample(x=array([0., 1., 0.]), fval=-2.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 0., 0.]), fval=0.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 1., 0.]), fval=0.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 0., 0.]), fval=1.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 0., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 0., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 1., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 1., 1.]), fval=4.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
###Markdown
If we want to obtain a better perspective of the results, statistics are very helpful, both with respect to the objective function values and their respective probabilities. Thus, the mean and standard deviation are the very basics for understanding the results.
###Code
fvals = [s.fval for s in qaoa_result.samples]
probabilities = [s.probability for s in qaoa_result.samples]
np.mean(fvals)
np.std(fvals)
###Output
_____no_output_____
###Markdown
Finally, despite all number-crunching, visualization is usually the best early-analysis approach.
###Code
samples_for_plot = {
" ".join(f"{qaoa_result.variables[i].name}={int(v)}" for i, v in enumerate(s.x)): s.probability
for s in filtered_samples
}
samples_for_plot
plot_histogram(samples_for_plot)
###Output
_____no_output_____
###Markdown
RecursiveMinimumEigenOptimizer The `RecursiveMinimumEigenOptimizer` takes a `MinimumEigenOptimizer` as input and applies the recursive optimization scheme to reduce the size of the problem one variable at a time.Once the size of the generated intermediate problem is below a given threshold (`min_num_vars`), the `RecursiveMinimumEigenOptimizer` uses another solver (`min_num_vars_optimizer`), e.g., an exact classical solver such as CPLEX or the `MinimumEigenOptimizer` based on the `NumPyMinimumEigensolver`.In the following, we show how to use the `RecursiveMinimumEigenOptimizer` using the two `MinimumEigenOptimizer` introduced before. First, we construct the `RecursiveMinimumEigenOptimizer` such that it reduces the problem size from 3 variables to 1 variable and then uses the exact solver for the last variable. Then we call `solve` to optimize the considered problem.
###Code
rqaoa = RecursiveMinimumEigenOptimizer(qaoa, min_num_vars=1, min_num_vars_optimizer=exact)
rqaoa_result = rqaoa.solve(qubo)
print(rqaoa_result)
filtered_samples = get_filtered_samples(
rqaoa_result.samples, threshold=0.005, allowed_status=(OptimizationResultStatus.SUCCESS,)
)
samples_for_plot = {
" ".join(f"{rqaoa_result.variables[i].name}={int(v)}" for i, v in enumerate(s.x)): s.probability
for s in filtered_samples
}
samples_for_plot
plot_histogram(samples_for_plot)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____
###Markdown
Minimum Eigen Optimizer Introduction An interesting class of optimization problems to be addressed by quantum computing are Quadratic Unconstrained Binary Optimization (QUBO) problems. Finding the solution to a QUBO is equivalent to finding the ground state of a corresponding Ising Hamiltonian, which is an important problem not only in optimization, but also in quantum chemistry and physics. For this translation, the binary variables taking values in $\{0, 1\}$ are replaced by spin variables taking values in $\{-1, +1\}$, which allows one to replace the resulting spin variables by Pauli Z matrices, and thus, an Ising Hamiltonian. For more details on this mapping we refer to [1]. Qiskit provides automatic conversion from a suitable `QuadraticProgram` to an Ising Hamiltonian, which then allows leveraging all the `MinimumEigenSolver` implementations, such as `VQE`, `QAOA`, or `NumpyMinimumEigensolver` (classical exact method). Qiskit Optimization provides the `MinimumEigenOptimizer` class, which wraps the translation to an Ising Hamiltonian (in Qiskit Terra also called `Operator`), the call to a `MinimumEigensolver`, and the translation of the results back to an `OptimizationResult`. In the following we first illustrate the conversion from a `QuadraticProgram` to an `Operator` and then show how to use the `MinimumEigenOptimizer` with different `MinimumEigensolver`s to solve a given `QuadraticProgram`. The algorithms in Qiskit automatically try to convert a given problem to the supported problem class if possible; for instance, the `MinimumEigenOptimizer` will automatically translate integer variables to binary variables or add linear equality constraints as a quadratic penalty term to the objective. It should be mentioned that a `QiskitOptimizationError` will be thrown if conversion of a quadratic program with integer variables is attempted. The circuit depth of `QAOA` potentially has to be increased with the problem size, which might be prohibitive for near-term quantum devices. A possible workaround is Recursive QAOA, as introduced in [2]. Qiskit generalizes this concept to the `RecursiveMinimumEigenOptimizer`, which is introduced at the end of this tutorial. References [1] [A. Lucas, *Ising formulations of many NP problems,* Front. Phys., 12 (2014).](https://arxiv.org/abs/1302.5843) [2] [S. Bravyi, A. Kliesch, R. Koenig, E. Tang, *Obstacles to State Preparation and Variational Optimization from Symmetry Protection,* arXiv preprint arXiv:1910.08980 (2019).](https://arxiv.org/abs/1910.08980) Converting a QUBO to an Operator
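As a rough sketch of the mapping described above (the exact sign convention used internally by Qiskit may differ), each binary variable $x_i \in \{0, 1\}$ can be written in terms of a spin variable $z_i \in \{-1, +1\}$ via $$x_i = \frac{1 - z_i}{2},$$ so that a QUBO objective $\sum_{i \leq j} Q_{ij} x_i x_j + \sum_i c_i x_i$ becomes a quadratic polynomial in the $z_i$; replacing each $z_i$ by the Pauli $Z$ operator acting on qubit $i$ and collecting all constant terms into an offset yields the Ising Hamiltonian.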
###Code
from qiskit import BasicAer
from qiskit.utils import algorithm_globals, QuantumInstance
from qiskit.algorithms import QAOA, NumPyMinimumEigensolver
from qiskit_optimization.algorithms import (
MinimumEigenOptimizer,
RecursiveMinimumEigenOptimizer,
SolutionSample,
OptimizationResultStatus,
)
from qiskit_optimization import QuadraticProgram
from qiskit.visualization import plot_histogram
from typing import List, Tuple
import numpy as np
# create a QUBO
qubo = QuadraticProgram()
qubo.binary_var("x")
qubo.binary_var("y")
qubo.binary_var("z")
qubo.minimize(linear=[1, -2, 3], quadratic={("x", "y"): 1, ("x", "z"): -1, ("y", "z"): 2})
print(qubo.export_as_lp_string())
###Output
\ This file has been generated by DOcplex
\ ENCODING=ISO-8859-1
\Problem name: CPLEX
Minimize
obj: x - 2 y + 3 z + [ 2 x*y - 2 x*z + 4 y*z ]/2
Subject To
Bounds
0 <= x <= 1
0 <= y <= 1
0 <= z <= 1
Binaries
x y z
End
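###Markdown
Before converting to an operator, it can be handy to evaluate the objective directly for a given assignment; a minimal sketch using the `objective.evaluate` method of the `QuadraticProgram` defined above:
###Code
# Evaluate the QUBO objective at x=0, y=1, z=0 (this assignment turns out to be optimal below).
qubo.objective.evaluate([0, 1, 0])
###Output
_____no_output_____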
###Markdown
Next we translate this QUBO into an Ising operator. This results not only in an `Operator` but also in a constant offset to be taken into account to shift the resulting value.
###Code
op, offset = qubo.to_ising()
print("offset: {}".format(offset))
print("operator:")
print(op)
###Output
offset: 1.5
operator:
-1.75 * ZII
+ 0.25 * IZI
+ 0.5 * ZZI
- 0.5 * IIZ
- 0.25 * ZIZ
+ 0.25 * IZZ
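###Markdown
As a quick sanity check of the offset, the smallest eigenvalue of the operator shifted by the offset should coincide with the optimal QUBO objective value computed classically below; a minimal sketch using the `NumPyMinimumEigensolver` imported earlier:
###Code
# The ground-state energy of the Ising operator plus the offset corresponds
# to the minimal value of the original QUBO objective.
eigen_result = NumPyMinimumEigensolver().compute_minimum_eigenvalue(op)
print(eigen_result.eigenvalue.real + offset)
###Output
_____no_output_____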
###Markdown
Sometimes a `QuadraticProgram` might also directly be given in the form of an `Operator`. For such cases, Qiskit also provides a translator from an `Operator` back to a `QuadraticProgram`, which we illustrate in the following.
###Code
qp = QuadraticProgram()
qp.from_ising(op, offset, linear=True)
print(qp.export_as_lp_string())
###Output
\ This file has been generated by DOcplex
\ ENCODING=ISO-8859-1
\Problem name: CPLEX
Minimize
obj: x_0 - 2 x_1 + 3 x_2 + [ 2 x_0*x_1 - 2 x_0*x_2 + 4 x_1*x_2 ]/2
Subject To
Bounds
0 <= x_0 <= 1
0 <= x_1 <= 1
0 <= x_2 <= 1
Binaries
x_0 x_1 x_2
End
###Markdown
This translator allows, for instance, one to translate an `Operator` to a `QuadraticProgram` and then solve the problem with other algorithms that are not based on the Ising Hamiltonian representation, such as the `GroverOptimizer`. Solving a QUBO with the MinimumEigenOptimizer We start by initializing the `MinimumEigensolver` we want to use.
###Code
algorithm_globals.random_seed = 10598
quantum_instance = QuantumInstance(
BasicAer.get_backend("statevector_simulator"),
seed_simulator=algorithm_globals.random_seed,
seed_transpiler=algorithm_globals.random_seed,
)
qaoa_mes = QAOA(quantum_instance=quantum_instance, initial_point=[0.0, 0.0])
exact_mes = NumPyMinimumEigensolver()
###Output
_____no_output_____
###Markdown
Then, we use the `MinimumEigensolver` to create the `MinimumEigenOptimizer`.
###Code
qaoa = MinimumEigenOptimizer(qaoa_mes) # using QAOA
exact = MinimumEigenOptimizer(exact_mes) # using the exact classical numpy minimum eigen solver
###Output
_____no_output_____
###Markdown
We first use the `MinimumEigenOptimizer` based on the classical exact `NumPyMinimumEigensolver` to get the optimal benchmark solution for this small example.
###Code
exact_result = exact.solve(qubo)
print(exact_result)
###Output
optimal function value: -2.0
optimal value: [0. 1. 0.]
status: SUCCESS
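###Markdown
As a quick hand check of this value: substituting $x=0$, $y=1$, $z=0$ into the objective $x - 2y + 3z + xy - xz + 2yz$ gives $0 - 2 + 0 + 0 - 0 + 0 = -2$, which matches the reported optimal function value.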
###Markdown
Next we apply the `MinimumEigenOptimizer` based on `QAOA` to the same problem.
###Code
qaoa_result = qaoa.solve(qubo)
print(qaoa_result)
###Output
optimal function value: -2.0
optimal value: [0. 1. 0.]
status: SUCCESS
###Markdown
Analysis of Samples `OptimizationResult` provides useful information in the form of `SolutionSample`s (here denoted as *samples*). Each `SolutionSample` contains information about the input values (`x`), the corresponding objective function value (`fval`), the fraction of samples corresponding to that input (`probability`), and the solution `status` (`SUCCESS`, `FAILURE`, `INFEASIBLE`). Multiple samples corresponding to the same input are consolidated into a single `SolutionSample` (with its `probability` attribute being the aggregate fraction of samples represented by that `SolutionSample`).
###Code
print("variable order:", [var.name for var in qaoa_result.variables])
for s in qaoa_result.samples:
print(s)
###Output
variable order: ['x', 'y', 'z']
SolutionSample(x=array([0., 1., 0.]), fval=-2.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 0., 0.]), fval=0.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 1., 0.]), fval=0.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 0., 0.]), fval=1.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 0., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 0., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 1., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 1., 1.]), fval=4.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
###Markdown
We may also want to filter samples according to their status or probabilities.
###Code
def get_filtered_samples(
samples: List[SolutionSample],
threshold: float = 0,
allowed_status: Tuple[OptimizationResultStatus] = (OptimizationResultStatus.SUCCESS,),
):
res = []
for s in samples:
if s.status in allowed_status and s.probability > threshold:
res.append(s)
return res
filtered_samples = get_filtered_samples(
qaoa_result.samples, threshold=0.005, allowed_status=(OptimizationResultStatus.SUCCESS,)
)
for s in filtered_samples:
print(s)
###Output
SolutionSample(x=array([0., 1., 0.]), fval=-2.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 0., 0.]), fval=0.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 1., 0.]), fval=0.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 0., 0.]), fval=1.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 0., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 0., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 1., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 1., 1.]), fval=4.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
###Markdown
If we want to obtain a better perspective of the results, statistics is very helpful, both with respect to the objective function values and their respective probabilities. Thus, mean and standard deviation are the very basics for understanding the results.
###Code
fvals = [s.fval for s in qaoa_result.samples]
probabilities = [s.probability for s in qaoa_result.samples]
np.mean(fvals)
np.std(fvals)
###Output
_____no_output_____
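###Markdown
Beyond mean and standard deviation, it can also be informative to see how much probability mass falls on the best objective value; a minimal sketch reusing the `fvals` and `probabilities` lists from the previous cell:
###Code
# Total probability of sampling an assignment that attains the minimal objective value.
best_fval = min(fvals)
prob_best = sum(p for f, p in zip(fvals, probabilities) if np.isclose(f, best_fval))
print(best_fval, prob_best)
###Output
_____no_output_____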
###Markdown
Finally, despite all the number-crunching, visualization is usually the best early-analysis approach.
###Code
samples_for_plot = {
" ".join(f"{qaoa_result.variables[i].name}={int(v)}" for i, v in enumerate(s.x)): s.probability
for s in filtered_samples
}
samples_for_plot
plot_histogram(samples_for_plot)
###Output
_____no_output_____
###Markdown
RecursiveMinimumEigenOptimizer The `RecursiveMinimumEigenOptimizer` takes a `MinimumEigenOptimizer` as input and applies the recursive optimization scheme to reduce the size of the problem one variable at a time. Once the size of the generated intermediate problem is below a given threshold (`min_num_vars`), the `RecursiveMinimumEigenOptimizer` uses another solver (`min_num_vars_optimizer`), e.g., an exact classical solver such as CPLEX or the `MinimumEigenOptimizer` based on the `NumPyMinimumEigensolver`. In the following, we show how to use the `RecursiveMinimumEigenOptimizer` using the two `MinimumEigenOptimizer`s introduced before. First, we construct the `RecursiveMinimumEigenOptimizer` such that it reduces the problem size from 3 variables to 1 variable and then uses the exact solver for the last variable. Then we call `solve` to optimize the considered problem.
###Code
rqaoa = RecursiveMinimumEigenOptimizer(qaoa, min_num_vars=1, min_num_vars_optimizer=exact)
rqaoa_result = rqaoa.solve(qubo)
print(rqaoa_result)
filtered_samples = get_filtered_samples(
rqaoa_result.samples, threshold=0.005, allowed_status=(OptimizationResultStatus.SUCCESS,)
)
samples_for_plot = {
" ".join(f"{rqaoa_result.variables[i].name}={int(v)}" for i, v in enumerate(s.x)): s.probability
for s in filtered_samples
}
samples_for_plot
plot_histogram(samples_for_plot)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____
###Markdown
Minimum Eigen Optimizer Introduction An interesting class of optimization problems to be addressed by quantum computing are Quadratic Unconstrained Binary Optimization (QUBO) problems. Finding the solution to a QUBO is equivalent to finding the ground state of a corresponding Ising Hamiltonian, which is an important problem not only in optimization, but also in quantum chemistry and physics. For this translation, the binary variables taking values in $\{0, 1\}$ are replaced by spin variables taking values in $\{-1, +1\}$, which allows one to replace the resulting spin variables by Pauli Z matrices, and thus, an Ising Hamiltonian. For more details on this mapping we refer to [1]. Qiskit provides automatic conversion from a suitable `QuadraticProgram` to an Ising Hamiltonian, which then allows one to leverage all the `MinimumEigenSolver` implementations, such as `VQE`, `QAOA`, or `NumpyMinimumEigensolver` (classical exact method). Qiskit wraps the translation to an Ising Hamiltonian (in Qiskit Aqua also called `Operator`), the call to a `MinimumEigensolver`, and the translation of the results back to an `OptimizationResult` in the `MinimumEigenOptimizer`. In the following we first illustrate the conversion from a `QuadraticProgram` to an `Operator` and then show how to use the `MinimumEigenOptimizer` with different `MinimumEigensolver`s to solve a given `QuadraticProgram`. The algorithms in Qiskit automatically try to convert a given problem to the supported problem class if possible; for instance, the `MinimumEigenOptimizer` will automatically translate integer variables to binary variables or add linear equality constraints as a quadratic penalty term to the objective. It should be mentioned that Aqua will throw a `QiskitOptimizationError` if conversion of a quadratic program with integer variables is attempted. The circuit depth of `QAOA` potentially has to be increased with the problem size, which might be prohibitive for near-term quantum devices. A possible workaround is Recursive QAOA, as introduced in [2]. Qiskit generalizes this concept to the `RecursiveMinimumEigenOptimizer`, which is introduced at the end of this tutorial. References [1] [A. Lucas, *Ising formulations of many NP problems,* Front. Phys., 12 (2014).](https://arxiv.org/abs/1302.5843) [2] [S. Bravyi, A. Kliesch, R. Koenig, E. Tang, *Obstacles to State Preparation and Variational Optimization from Symmetry Protection,* arXiv preprint arXiv:1910.08980 (2019).](https://arxiv.org/abs/1910.08980) Converting a QUBO to an Operator
###Code
from qiskit import BasicAer
from qiskit.utils import algorithm_globals, QuantumInstance
from qiskit.algorithms import QAOA, NumPyMinimumEigensolver
from qiskit_optimization.algorithms import MinimumEigenOptimizer, RecursiveMinimumEigenOptimizer
from qiskit_optimization import QuadraticProgram
# create a QUBO
qubo = QuadraticProgram()
qubo.binary_var('x')
qubo.binary_var('y')
qubo.binary_var('z')
qubo.minimize(linear=[1,-2,3], quadratic={('x', 'y'): 1, ('x', 'z'): -1, ('y', 'z'): 2})
print(qubo.export_as_lp_string())
###Output
\ This file has been generated by DOcplex
\ ENCODING=ISO-8859-1
\Problem name: CPLEX
Minimize
obj: x - 2 y + 3 z + [ 2 x*y - 2 x*z + 4 y*z ]/2
Subject To
Bounds
0 <= x <= 1
0 <= y <= 1
0 <= z <= 1
Binaries
x y z
End
###Markdown
Next we translate this QUBO into an Ising operator. This results not only in an `Operator` but also in a constant offset to be taken into account to shift the resulting value.
###Code
op, offset = qubo.to_ising()
print('offset: {}'.format(offset))
print('operator:')
print(op)
###Output
offset: 1.5
operator:
-1.75 * ZII
+ 0.25 * IZI
+ 0.5 * ZZI
- 0.5 * IIZ
- 0.25 * ZIZ
+ 0.25 * IZZ
###Markdown
Sometimes a `QuadraticProgram` might also directly be given in the form of an `Operator`. For such cases, Qiskit also provides a converter from an `Operator` back to a `QuadraticProgram`, which we illustrate in the following.
###Code
qp = QuadraticProgram()
qp.from_ising(op, offset, linear=True)
print(qp.export_as_lp_string())
###Output
\ This file has been generated by DOcplex
\ ENCODING=ISO-8859-1
\Problem name: CPLEX
Minimize
obj: x_0 - 2 x_1 + 3 x_2 + [ 2 x_0*x_1 - 2 x_0*x_2 + 4 x_1*x_2 ]/2
Subject To
Bounds
0 <= x_0 <= 1
0 <= x_1 <= 1
0 <= x_2 <= 1
Binaries
x_0 x_1 x_2
End
###Markdown
This converter allows, for instance, one to translate an `Operator` to a `QuadraticProgram` and then solve the problem with other algorithms that are not based on the Ising Hamiltonian representation, such as the `GroverOptimizer`. Solving a QUBO with the MinimumEigenOptimizer We start by initializing the `MinimumEigensolver` we want to use.
###Code
algorithm_globals.random_seed = 10598
quantum_instance = QuantumInstance(BasicAer.get_backend('statevector_simulator'),
seed_simulator=algorithm_globals.random_seed,
seed_transpiler=algorithm_globals.random_seed)
qaoa_mes = QAOA(quantum_instance=quantum_instance, initial_point=[0., 0.])
exact_mes = NumPyMinimumEigensolver()
###Output
_____no_output_____
###Markdown
Then, we use the `MinimumEigensolver` to create the `MinimumEigenOptimizer`.
###Code
qaoa = MinimumEigenOptimizer(qaoa_mes) # using QAOA
exact = MinimumEigenOptimizer(exact_mes) # using the exact classical numpy minimum eigen solver
###Output
_____no_output_____
###Markdown
We first use the `MinimumEigenOptimizer` based on the classical exact `NumPyMinimumEigensolver` to get the optimal benchmark solution for this small example.
###Code
exact_result = exact.solve(qubo)
print(exact_result)
###Output
optimal function value: -2.0
optimal value: [0. 1. 0.]
status: SUCCESS
###Markdown
Next we apply the `MinimumEigenOptimizer` based on `QAOA` to the same problem.
###Code
qaoa_result = qaoa.solve(qubo)
print(qaoa_result)
###Output
optimal function value: -2.0
optimal value: [0. 1. 0.]
status: SUCCESS
###Markdown
RecursiveMinimumEigenOptimizer The `RecursiveMinimumEigenOptimizer` takes a `MinimumEigenOptimizer` as input and applies the recursive optimization scheme to reduce the size of the problem one variable at a time. Once the size of the generated intermediate problem is below a given threshold (`min_num_vars`), the `RecursiveMinimumEigenOptimizer` uses another solver (`min_num_vars_optimizer`), e.g., an exact classical solver such as CPLEX or the `MinimumEigenOptimizer` based on the `NumPyMinimumEigensolver`. In the following, we show how to use the `RecursiveMinimumEigenOptimizer` using the two `MinimumEigenOptimizer`s introduced before. First, we construct the `RecursiveMinimumEigenOptimizer` such that it reduces the problem size from 3 variables to 1 variable and then uses the exact solver for the last variable. Then we call `solve` to optimize the considered problem.
###Code
rqaoa = RecursiveMinimumEigenOptimizer(qaoa, min_num_vars=1, min_num_vars_optimizer=exact)
rqaoa_result = rqaoa.solve(qubo)
print(rqaoa_result)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____
###Markdown
Minimum Eigen Optimizer Introduction An interesting class of optimization problems to be addressed by quantum computing are Quadratic Unconstrained Binary Optimization (QUBO) problems. Finding the solution to a QUBO is equivalent to finding the ground state of a corresponding Ising Hamiltonian, which is an important problem not only in optimization, but also in quantum chemistry and physics. For this translation, the binary variables taking values in $\{0, 1\}$ are replaced by spin variables taking values in $\{-1, +1\}$, which allows one to replace the resulting spin variables by Pauli Z matrices, and thus, an Ising Hamiltonian. For more details on this mapping we refer to [1]. Qiskit provides automatic conversion from a suitable `QuadraticProgram` to an Ising Hamiltonian, which then allows one to leverage all the `MinimumEigenSolver` implementations, such as `VQE`, `QAOA`, or `NumpyMinimumEigensolver` (classical exact method). Qiskit wraps the translation to an Ising Hamiltonian (in Qiskit Aqua also called `Operator`), the call to a `MinimumEigensolver`, and the translation of the results back to an `OptimizationResult` in the `MinimumEigenOptimizer`. In the following we first illustrate the conversion from a `QuadraticProgram` to an `Operator` and then show how to use the `MinimumEigenOptimizer` with different `MinimumEigensolver`s to solve a given `QuadraticProgram`. The algorithms in Qiskit automatically try to convert a given problem to the supported problem class if possible; for instance, the `MinimumEigenOptimizer` will automatically translate integer variables to binary variables or add linear equality constraints as a quadratic penalty term to the objective. It should be mentioned that Aqua will throw a `QiskitOptimizationError` if conversion of a quadratic program with integer variables is attempted. The circuit depth of `QAOA` potentially has to be increased with the problem size, which might be prohibitive for near-term quantum devices. A possible workaround is Recursive QAOA, as introduced in [2]. Qiskit generalizes this concept to the `RecursiveMinimumEigenOptimizer`, which is introduced at the end of this tutorial. References [1] [A. Lucas, *Ising formulations of many NP problems,* Front. Phys., 12 (2014).](https://arxiv.org/abs/1302.5843) [2] [S. Bravyi, A. Kliesch, R. Koenig, E. Tang, *Obstacles to State Preparation and Variational Optimization from Symmetry Protection,* arXiv preprint arXiv:1910.08980 (2019).](https://arxiv.org/abs/1910.08980) Converting a QUBO to an Operator
###Code
from qiskit import BasicAer
from qiskit.utils import algorithm_globals, QuantumInstance
from qiskit.algorithms import QAOA, NumPyMinimumEigensolver
from qiskit_optimization.algorithms import MinimumEigenOptimizer, RecursiveMinimumEigenOptimizer, SolutionSample, OptimizationResultStatus
from qiskit_optimization import QuadraticProgram
from qiskit.visualization import plot_histogram
from typing import List, Tuple
import numpy as np
# create a QUBO
qubo = QuadraticProgram()
qubo.binary_var('x')
qubo.binary_var('y')
qubo.binary_var('z')
qubo.minimize(linear=[1,-2,3], quadratic={('x', 'y'): 1, ('x', 'z'): -1, ('y', 'z'): 2})
print(qubo.export_as_lp_string())
###Output
\ This file has been generated by DOcplex
\ ENCODING=ISO-8859-1
\Problem name: CPLEX
Minimize
obj: x - 2 y + 3 z + [ 2 x*y - 2 x*z + 4 y*z ]/2
Subject To
Bounds
0 <= x <= 1
0 <= y <= 1
0 <= z <= 1
Binaries
x y z
End
###Markdown
Next we translate this QUBO into an Ising operator. This results not only in an `Operator` but also in a constant offset to be taken into account to shift the resulting value.
###Code
op, offset = qubo.to_ising()
print('offset: {}'.format(offset))
print('operator:')
print(op)
###Output
offset: 1.5
operator:
-1.75 * ZII
+ 0.25 * IZI
+ 0.5 * ZZI
- 0.5 * IIZ
- 0.25 * ZIZ
+ 0.25 * IZZ
###Markdown
Sometimes a `QuadraticProgram` might also directly be given in the form of an `Operator`. For such cases, Qiskit also provides a converter from an `Operator` back to a `QuadraticProgram`, which we illustrate in the following.
###Code
qp = QuadraticProgram()
qp.from_ising(op, offset, linear=True)
print(qp.export_as_lp_string())
###Output
\ This file has been generated by DOcplex
\ ENCODING=ISO-8859-1
\Problem name: CPLEX
Minimize
obj: x_0 - 2 x_1 + 3 x_2 + [ 2 x_0*x_1 - 2 x_0*x_2 + 4 x_1*x_2 ]/2
Subject To
Bounds
0 <= x_0 <= 1
0 <= x_1 <= 1
0 <= x_2 <= 1
Binaries
x_0 x_1 x_2
End
###Markdown
This converter allows, for instance, one to translate an `Operator` to a `QuadraticProgram` and then solve the problem with other algorithms that are not based on the Ising Hamiltonian representation, such as the `GroverOptimizer`. Solving a QUBO with the MinimumEigenOptimizer We start by initializing the `MinimumEigensolver` we want to use.
###Code
algorithm_globals.random_seed = 10598
quantum_instance = QuantumInstance(BasicAer.get_backend('statevector_simulator'),
seed_simulator=algorithm_globals.random_seed,
seed_transpiler=algorithm_globals.random_seed)
qaoa_mes = QAOA(quantum_instance=quantum_instance, initial_point=[0., 0.])
exact_mes = NumPyMinimumEigensolver()
###Output
_____no_output_____
###Markdown
Then, we use the `MinimumEigensolver` to create the `MinimumEigenOptimizer`.
###Code
qaoa = MinimumEigenOptimizer(qaoa_mes) # using QAOA
exact = MinimumEigenOptimizer(exact_mes) # using the exact classical numpy minimum eigen solver
###Output
_____no_output_____
###Markdown
We first use the `MinimumEigenOptimizer` based on the classical exact `NumPyMinimumEigensolver` to get the optimal benchmark solution for this small example.
###Code
exact_result = exact.solve(qubo)
print(exact_result)
###Output
optimal function value: -2.0
optimal value: [0. 1. 0.]
status: SUCCESS
###Markdown
Next we apply the `MinimumEigenOptimizer` based on `QAOA` to the same problem.
###Code
qaoa_result = qaoa.solve(qubo)
print(qaoa_result)
###Output
optimal function value: -2.0
optimal value: [0. 1. 0.]
status: SUCCESS
###Markdown
Analysis of Samples `OptimizationResult` provides useful information in the form of `SolutionSample`s (here denoted as *samples*). They contain information about the input values `x`, the objective function values `fval`, the probability of obtaining that result (`probability`), and the solution status `status` (`SUCCESS`, `FAILURE`, `INFEASIBLE`).
###Code
print('variable order:', [var.name for var in qaoa_result.variables])
for s in qaoa_result.samples:
print(s)
###Output
variable order: ['x', 'y', 'z']
SolutionSample(x=array([0., 1., 0.]), fval=-2.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 0., 0.]), fval=0.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 1., 0.]), fval=0.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 0., 0.]), fval=1.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 0., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 0., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 1., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 1., 1.]), fval=4.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
###Markdown
We may also want to filter samples according to their status or probabilities.
###Code
def get_filtered_samples(samples: List[SolutionSample],
threshold: float = 0,
allowed_status: Tuple[OptimizationResultStatus] = (OptimizationResultStatus.SUCCESS,)):
res = []
for s in samples:
if s.status in allowed_status and s.probability > threshold:
res.append(s)
return res
filtered_samples = get_filtered_samples(qaoa_result.samples,
threshold=0.005,
allowed_status=(OptimizationResultStatus.SUCCESS,))
for s in filtered_samples:
print(s)
###Output
SolutionSample(x=array([0., 1., 0.]), fval=-2.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 0., 0.]), fval=0.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 1., 0.]), fval=0.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 0., 0.]), fval=1.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 0., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 0., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([0., 1., 1.]), fval=3.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
SolutionSample(x=array([1., 1., 1.]), fval=4.0, probability=0.12499999999999994, status=<OptimizationResultStatus.SUCCESS: 0>)
###Markdown
If we want to obtain a better perspective of the results, statistics is very helpful, both with respect to the objective function values and their respective probabilities. Thus, mean and standard deviation are the very basics for understanding the results.
###Code
fvals = [s.fval for s in qaoa_result.samples]
probabilities = [s.probability for s in qaoa_result.samples]
np.mean(fvals)
np.std(fvals)
###Output
_____no_output_____
###Markdown
Finally, despite all number-crunching, visualization is usually the best early-analysis approach.
###Code
samples_for_plot = {' '.join(f'{qaoa_result.variables[i].name}={int(v)}'
for i, v in enumerate(s.x)): s.probability
for s in filtered_samples}
samples_for_plot
plot_histogram(samples_for_plot)
###Output
_____no_output_____
###Markdown
RecursiveMinimumEigenOptimizer The `RecursiveMinimumEigenOptimizer` takes a `MinimumEigenOptimizer` as input and applies the recursive optimization scheme to reduce the size of the problem one variable at a time. Once the size of the generated intermediate problem is below a given threshold (`min_num_vars`), the `RecursiveMinimumEigenOptimizer` uses another solver (`min_num_vars_optimizer`), e.g., an exact classical solver such as CPLEX or the `MinimumEigenOptimizer` based on the `NumPyMinimumEigensolver`. In the following, we show how to use the `RecursiveMinimumEigenOptimizer` using the two `MinimumEigenOptimizer`s introduced before. First, we construct the `RecursiveMinimumEigenOptimizer` such that it reduces the problem size from 3 variables to 1 variable and then uses the exact solver for the last variable. Then we call `solve` to optimize the considered problem.
###Code
rqaoa = RecursiveMinimumEigenOptimizer(qaoa, min_num_vars=1, min_num_vars_optimizer=exact)
rqaoa_result = rqaoa.solve(qubo)
print(rqaoa_result)
filtered_samples = get_filtered_samples(rqaoa_result.samples,
threshold=0.005,
allowed_status=(OptimizationResultStatus.SUCCESS,))
samples_for_plot = {' '.join(f'{rqaoa_result.variables[i].name}={int(v)}'
for i, v in enumerate(s.x)): s.probability
for s in filtered_samples}
samples_for_plot
plot_histogram(samples_for_plot)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____ |